ICLR
Title Mesh-free Eulerian Physics-Informed Neural Networks Abstract Physics-informed Neural Networks (PINNs) have recently emerged as a principled way to include prior physical knowledge in form of partial differential equations (PDEs) into neural networks. Although PINNs are generally viewed as mesh-free, current approaches still rely on collocation points within a bounded region, even in settings with spatially sparse signals. Furthermore, if the boundaries are not known, the selection of such a region is difficult and often results in a large proportion of collocation points being selected in areas of low relevance. To resolve this severe drawback of current methods, we present a mesh-free and adaptive approach termed particle-density PINN (pdPINN), which is inspired by the microscopic viewpoint of fluid dynamics. The method is based on the Eulerian formulation and, different from classical mesh-free method, does not require the introduction of Lagrangian updates. We propose to sample directly from the distribution over the particle positions, eliminating the need to introduce boundaries while adaptively focusing on the most relevant regions. This is achieved by interpreting a nonnegative physical quantity (such as the density or temperature) as an unnormalized probability distribution from which we sample with dynamic Monte Carlo methods. The proposed method leads to higher sample efficiency and improved performance of PINNs. These advantages are demonstrated on various experiments based on the continuity equations, Fokker-Planck equations, and the heat equation. 1 INTRODUCTION Many phenomena in physics are commonly described by partial differential equations (PDEs) which give rise to complex dynamical systems but often lack tractable analytical solutions. Important examples can be found for instance in fluid dynamics with typical applications in the design of gas and steam turbines (Oosthuizen & Carscallen, 2013), as well as modeling the collective motion of self-driven particles (Marchetti et al., 2013) such as flocks of birds or bacteria colonies (Szabó et al., 2006; Nussbaumer et al., 2021). Despite the relevant progress in establishing numerical PDE solvers, such as finite element and finite volume methods, the seamless incorporation of data remains an open problem (Freitag, 2020). To fill this gap, Physics-informed Neural Networks (PINNs) have emerged as an attractive alternative to classical methods for data-based forward and inverse solving of PDEs. The general idea of PINNs is to use the expressive power of modern neural architectures for solving partial differential equations (PDEs) in a data-driven way by minimizing a PDE-based loss, cf. Raissi et al. (2019). Consider parameterized PDEs of the general form f(t,x|λ) := ∂tu(t,x) + P (u|λ) = 0, (1) where P is a non-linear operator parameterized by λ, and ∂t is the partial time derivative w.r.t. t ∈ [0, T ]. The position x ∈ Ω is defined on a spatial domain Ω ⊆ Rd. The PDE is subject to initial condition g0 u(0,x) = g0(x) (2) for x ∈ Ω, and boundary conditions g∂Ω u(t,x) = g∂Ω(x) (3) for x ∈ ∂Ω and t ∈ [0, T ]. The main idea of PINNs consists in approximating u(t,x) (and hence f(t,x)) with a neural network given a small set of N noisy observations uobs u(t(i),x(i)) + ϵ(i) = u (i) obs (4) with noise ϵ(i) ≪ u(i) ∀i ∈ {0, 1, . . . , N}. 
This allows us to consider the following two important problem settings: If λ is known, the PDE is fully specified, and we aim to find a solution u in a data-driven manner by training a neural network. The PDE takes the role of a regularizer, where the particular physical laws provide our prior information. A second setting considers the inverse learning of the parameters λ by including them into the optimization process in order to infer physical properties such as the viscosity coefficient of a fluid (Jagtap et al., 2020). Initial work on solving time-independent PDEs with neural networks with such PDE-based penalties was pioneered by Dissanayake & Phan-Thien (1994) and van Milligen et al. (1995), with later adoptions such as Parisi et al. (2003) extending it to non-steady and time-dependent settings.

Loss functions. Typically, PINNs approximate f(t,x) by the network fΘ(t,x), in which the parameters Θ are adjusted by minimizing the combined loss of (i) reconstructing available observations (Lobs), (ii) softly enforcing the PDE constraints on the domain (Lf), and (iii) fulfilling the boundary (Lb) and initial conditions (Linit), i.e.

Θ = argmin_Θ [ w1 Lobs(X, t, uobs, Θ) + w2 Lf(Θ) + w3 Lb(Θ) + w4 Linit(Θ) ],  (5)

with loss weights wi ∈ R≥0. A common choice for Lobs, Lb, and Linit is the expected L2 loss, approximated via the average L2 loss over the observations and via sampled boundary and initial conditions, respectively. It should be noted that the formulations of the forward and inverse problem are identical in this setting, as observations and initial conditions are implemented in a similar manner.

Enforcing the PDE. Although PINNs are by nature mesh-free, the PDE loss Lf in Eq. 5 used for the soft enforcement of Eq. 1 requires a similar discretization step for approximating an integral over the continuous signal domain,

Lf(Θ) = (1 / |[0, T] × Ω|) ∫₀ᵀ ∫Ω ||fΘ(t,x)||₂² dx dt = E_p(t,x)[ ||fΘ(t,x)||₂² ] ≈ (1/n) Σᵢ₌₁ⁿ ||fΘ(tᵢ, xᵢ)||₂²,  (6)

with p(t,x) being supported on [0, T] × Ω. The points {(t(j), x(j)) : j = 1, …, n} ⊂ [0, T] × Ω on which the PDE loss is evaluated are commonly referred to as collocation points. This formulation of PINNs for solving Eq. 1 is an Eulerian one, as the function fΘ is updated by evaluating the PDE with respect to collocation points fixed in space. Initial approaches for selecting the collocation points in PINNs relied on a fixed grid (Lagaris et al., 1998; Rudd, 2013; Lagaris et al., 2000), followed by work proposing stochastic estimates of the integral via (Quasi-) Monte Carlo methods (Sirignano & Spiliopoulos, 2018; Lu et al., 2021; Chen et al., 2019) or Latin Hypercube sampling (Raissi et al., 2019).

However, these approaches to Eulerian PINNs cannot be directly applied if there are no known boundaries or boundary conditions, e.g. for Ω = Rd. Additionally, problems can arise if the constrained region is large compared to the area of interest. Considering for example the shock wave (of a compressible gas) in a comparably large space, most collocation points would fall into areas of low density. We argue that due to the locality of particle interactions, the regions with higher density are more relevant for regularizing the network. To address these shortcomings of previous methods, we propose a mesh-free and adaptive approach for sampling collocation points, illustrated on the example of compressible fluids. By changing p(t,x) to the distribution over the particle positions in the fluid we effectively change the loss functional in Eq. 6.
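To make the baseline concrete, the following sketch shows how the standard Monte Carlo estimate of Eq. 6 is typically implemented with uniformly drawn collocation points in a bounded box. This is an illustrative PyTorch-style snippet, not the authors' code; names such as f_theta and the box parameters are placeholders.

```python
import torch

def uniform_pde_loss(f_theta, T, box_lo, box_hi, n_points):
    """Monte Carlo estimate of the standard PDE loss in Eq. 6: the mean squared
    PDE residual over collocation points sampled uniformly from [0, T] x Omega_B,
    where Omega_B is the axis-aligned box [box_lo, box_hi]."""
    d = box_lo.shape[0]
    t = torch.rand(n_points, 1) * T                            # t ~ U(0, T)
    x = box_lo + torch.rand(n_points, d) * (box_hi - box_lo)   # x ~ U(Omega_B)
    residual = f_theta(t, x)                                   # PDE residual f_Theta(t, x)
    return residual.pow(2).sum(dim=-1).mean()                  # approximates E[||f_Theta||_2^2]
```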
We then generalize to other settings, such as thermodynamics, by interpreting a positive, scalar quantity of interest with a finite integral as a particle density. Within this work we specifically focus on PDEs that can be derived based on local particle interactions or can be shown to be equivalent to such a view, as for example is the case for the heat equation with its connection to particle diffusion. Notably, we do not require the introduction of Lagrangian updates, as classical mesh-free methods do, which would be based on evaluating the PDE with respect to moving particles (see also section 2). Main contributions. The main contributions of this paper are as follows: • We demonstrate that PINNs with uniform sampling strategies (and refinement methods based on uniform proposals) fail in settings with spatially sparse signals as well as in unbounded signal domains; these problems can severely degrade the network’s predictive performance. • In order to overcome these limitations of existing approaches, we propose a truly mesh-free version of Eulerian PINNs, in which the collocation points are sampled using physicsmotivated MCMC methods. By staying within the Eulerian framework, we avoid conceptual challenges of classical mesh-free methods based on Lagrangian updates such as the enforcement of boundary conditions. • The proposed model is applicable to a huge range of dynamical systems governed by PDEs that share an underlying microscopic particle description, such as several hydrodynamic, electro- and thermo-dynamic problems. • We rigorously evaluate and compare our proposed method with existing approaches in high-dimensional settings. Compared to existing mesh refinement methods, significantly fewer collocation points are required to achieve similar or better predictive performances, while still being more flexible. 2 RELATED WORK Mesh-Free Fluid Dynamics. Classical mesh-free approaches in computational fluid dynamics are based on non-parametric function representations, with Smoothed Particle Hydrodynamics (SPH) (Lind et al., 2020; Gingold & Monaghan, 1977) being the most prominent example. In SPH, fluid properties such as the density and pressure are represented by a discrete set of particles and interpolated using a smoothing kernel function. For updating the function forward in time, the particles have to be propagated according to the Lagrangian formulation of the PDE, relying on the kernel for computing spatial derivatives. One of the benefits of such a representation is that mass is conserved by construction. However, Lagrangian updates become challenging when enforcing boundary conditions, requiring the introduction of ad-hoc "dummy" or "mirror" particles (Lind et al., 2020). Instead, we present a mesh-free, particle-based, PINN that does not require Lagrangian updates, and is already applicable in the Eulerian formulation. It should be noted that the proposed pdPINNs can in principle be combined with Lagrangian updates such as proposed by Raissi et al. (2019) and later by Wessels et al. (2020). But as the intention of this work is to improve upon current Eulerian PINNs, we refer to future work for the comparison and extension to the Lagrangian formalism. Alternative Meshes and Losses for PINNs. Recent work proposes local refinement methods for PINNs by adding more samples within regions of high error (Lu et al., 2021; Tadiparthi & Bhattacharya, 2021). Residual adaptive refinement (RAR) is suggested by Lu et al. 
(2021), which is based on regularly evaluating the PDE loss on a set of uniformly drawn samples. The locations corresponding to the highest PDE loss are then added to the set of collocation points used in training. Tadiparthi & Bhattacharya (2021, preprint) further enhance RAR by learning a linear map between the uniform distribution and the distribution over the PDE loss by optimizing an optimal transport objective. By sampling uniformly and subsequently transforming these samples, it is attempted to focus on regions of higher error. Due to the conceptual similarity to RAR, we will denote this method as "OT-RAR". The work of Nabian et al. (2021) explores Importance Sampling based on the (unnormalized) proposal distribution ||fΘ(t,x)||22 for a more sample efficient evaluation of Eq. 6. Samples are drawn using a variation of Inverse Transform sampling (Steele, 1987). However, in all these cases the underlying mechanism for exploring regions of high error is based on (quasi-) uniform sampling within the boundaries. As such, they do not resolve the issues of unknown boundaries and will furthermore be infeasible in higher dimensions. Kinetic Theory: From particles to PDEs. Kinetic theory shows that essential conservation laws of fluids can be derived from a microscopic (or molecular) viewpoint (Born & Green, 1946). Interactions describing the dynamics of a fluid are described starting from a set of individual particles. The basis of this approach is the so-called molecular distribution function Ψ over phase space, i.e. Ψ(t,x,v) such that ∫ ∆x ∫ ∆v Ψ(t,x,v)dvdx (7) is the probability that a molecule with a velocity within ∆v = ∆v1∆v2∆v3 occupies the volume ∆x = ∆x1∆x2∆x3. Based on this distribution function, it is possible to define common quantities as the (mass or particle) density, (local mean) velocity, and macroscopic PDEs by considering the local interactions of individual particles. The one-particle phase space is commonly known from its application in the Boltzmann equation for modelling two-body interactions describing gases (Green, 1956) and active matter (e.g. flocks of birds) (Bertin et al., 2006). The more general form including higher interaction terms is necessary for deriving conservation laws of liquids (Born & Green, 1946). 3 PARTICLE-DENSITY PINNS In this section we introduce the concept of mesh-free particle-density PINNs (pdPINNs). Firstly, we examine limitations of the common PDE loss in Eq. 6 and, secondly, we present a solution by integrating over the position of particles instead of the full support of the signal domain. The underlying assumption of our approach is that the dynamics described by the PDE can be explained in terms of local interactions of particles. This is the case, for instance, for commonly considered dynamics of gases, liquids or active particles (Hoover & Hoover, 2003; Toner & Tu, 1995). Existing limitations of Eulerian PINNs. Consider the problem of modeling a (possibly non-steady) compressible fluid, i.e. a fluid with a spatially and temporally evolving density ρ(t,x) and velocity v(t,x). For the sake of notational brevity, we will denote these by ρ and v. Given noisy observations, our particular interest lies in the prediction of particle movements, hence in the approximation of the density (and potentially other physical quantities) with a neural network ρΘ. Additional quantities such as the velocity or pressure might also be observed and modeled. 
Commonly, the PDE then serves as a physics-based regularizer of the network by enforcing the PDE loss Lf in Eq. 6 during standard PINN training. For this, Lf is evaluated on a set of collocation points that are, for example, uniformly distributed on a bounded region. However, the limitations of this approach already become apparent when considering a simple advection problem defined by the following PDE:

∂tρ + v · (∇ρ) = 0.  (8)

Figure 1 illustrates a one-dimensional case on the domain [0, T] × Ω, with Ω = R, and a known constant velocity v ∝ 1. We measure the density ρ(i) at different (spatially fixed) points in time and space {(t(i), x(i))}, on which a neural network ρΘ(t,x) is trained. For optimizing the standard PDE loss Lf as given in Eq. 6, we would require a bounded region ΩB := [a, b] ⊂ Ω with a < b and a, b ∈ R. This, in turn, leads to two issues:

1. Since the moving density occupies a small subset of Ω, uniformly distributed collocation points within ΩB will enforce Eq. 8 in areas of low density. This results in insufficient regularization of ρΘ.
2. Defining a suitable bounded region ΩB requires a priori knowledge about the solution of the PDE, which is generally not available. Choosing too tight boundaries would lead to large parts of the density moving out of the considered area ΩB. Too large boundaries would instead lead to poor regularization, as this would worsen the sparsity problem in issue (1.).

In practice, most Eulerian PINN approaches opt for naively defining a sufficiently wide region ΩB, resulting in a poor reconstruction. In the context of our advection problem, this is showcased in Figure 1b. To properly resolve the aforementioned issues, one should (i) focus on areas that have a relevant regularizing effect on the prediction of ρΘ and (ii) adapt to the fluid movements without being restricted to a predefined mesh.

Mesh-Free Eulerian PINNs. We thus propose to reformulate the PDE loss in Eq. 6 as the expectation of ||fΘ(t,x)||₂² with respect to the molecular distribution Ψ(t,x) introduced in the related work section 2:

Lpd(Θ) ≈ ∫₀ᵀ ∫Ω Ψ(t,x) ||fΘ(t,x)||₂² dx dt.  (9)

This completely removes the need to define ad-hoc boundaries while providing the ability to flexibly focus on highly relevant regions, i.e. those that are more densely populated. As the particle density corresponds directly to the occupation probability Ψ(t,x) of a molecule, up to a normalization constant, we can estimate Lpd via samples drawn from the normalized particle density, which is denoted as ρN. For homogeneous fluids, this coincides with the normalized mass density. In summary, we propose to draw collocation points from the normalized density:

(tᵢ, xᵢ) ∼ ρN(t,x) = (1/Z) ρ(t,x).  (10)

The true particle positions and the density ρN are however unknown in practice. Instead, we have to rely on the learned density ρΘ(t,x) as a proxy provided by the neural network. We denote the associated normalized PDF by qΘ(t,x) = (1/Z′) ρΘ(t,x) with support on [0, T] × Ω. The PDE loss is then defined as the expectation w.r.t. qΘ(t,x):

Lpd(Θ) = E_qΘ(t,x)[ ||fΘ(t,x)||₂² ] = ∫₀ᵀ ∫Ω qΘ(t,x) ||fΘ(t,x)||₂² dx dt.  (11)

In order to approximate this integral, samples need to be drawn from qΘ(t,x). This can be done in a principled way by using dynamic Monte Carlo methods, despite the fact that the normalization constant Z′ is unknown. We highlight that, in contrast to the mesh-based loss in Eq. 6, the loss in Eq. 11 is also suitable for problems on unbounded domains such as Ω = Rd.
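To illustrate how Eq. 11 can be estimated in practice, the sketch below first draws collocation points from the unnormalized learned density ρΘ with a plain random-walk Metropolis-Hastings sampler (one of several possible samplers, see section 4), and then evaluates the mean squared residual of the advection equation (Eq. 8) via automatic differentiation. This is a minimal illustration under our own naming assumptions, not the authors' implementation; rho_theta and v_theta are assumed to map a batch of (t, x) rows to density and velocity.

```python
import torch

def metropolis_hastings(log_rho, x0, n_steps=200, step_size=0.1):
    """Random-walk Metropolis-Hastings targeting the unnormalized log-density
    log_rho over (t, x); the normalization constant Z' is never required."""
    x, logp = x0.clone(), log_rho(x0)
    for _ in range(n_steps):
        prop = x + step_size * torch.randn_like(x)
        logp_prop = log_rho(prop)
        accept = torch.rand(x.shape[0]) < (logp_prop - logp).exp()
        x = torch.where(accept.unsqueeze(-1), prop, x)
        logp = torch.where(accept, logp_prop, logp)
    return x

def pd_pde_loss(rho_theta, v_theta, points):
    """Estimate of Eq. 11 for the advection equation (Eq. 8): the mean squared
    residual d_t rho + v . grad(rho) at collocation points drawn from rho_theta."""
    points = points.detach().requires_grad_(True)   # rows are (t, x_1, ..., x_d)
    rho = rho_theta(points)                         # shape (n, 1)
    v = v_theta(points)                             # shape (n, d)
    grad_rho = torch.autograd.grad(rho.sum(), points, create_graph=True)[0]
    d_rho_dt, grad_rho_x = grad_rho[:, :1], grad_rho[:, 1:]
    residual = d_rho_dt + (v * grad_rho_x).sum(dim=-1, keepdim=True)
    return residual.pow(2).mean()

# Usage sketch: draw samples from the current rho_Theta, then evaluate the loss.
# points = metropolis_hastings(lambda z: rho_theta(z).squeeze(-1).clamp_min(1e-12).log(), init_points)
# loss = pd_pde_loss(rho_theta, v_theta, points)
```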
Applicability of pdPINNs. Although motivated in the context of an advection problem, the proposed approach is generally applicable to a wide range of PDEs. The advection equation 8 can be seen as a special case of mass conservation (assuming ∇ · v = 0), which is one of the fundamental physical principles expressed as a continuity equation. This continuity equation relates temporal changes of the fluid density ρ to spatial changes of the flux density ρv through

∂tρ + ∇ · (ρv) = 0.  (12)

Another common physical process that is suited for our approach is diffusion, such as in the heat equation, where local interactions of particles give rise to the following PDE (as established by Fick's second law):

∂tT − α∇²T = 0,  (13)

where T denotes the temperature interpreted as a density, α the thermal (or mass) diffusivity, and ∇² the Laplacian operator. By introducing additional constraints to the diffusion and mass conservation, one can describe viscous fluids with the Navier-Stokes equations or even self-propelled, active particles, for which Toner and Tu (Toner & Tu, 1995; Tu et al., 1998; Toner & Tu, 1998) introduced hydrodynamic equations. Other possible applications involve Maxwell's equations for conservation of charge in electrodynamics, as well as the distribution of Brownian particles with drift described by the Fokker-Planck equations. In general, our method is applicable in settings where (i) a non-negative scalar field (with a finite integral) of interest can be interpreted as a particle density, and (ii) the local interactions of these particles give rise to the considered PDEs.

4 MODEL AND IMPLEMENTATION

A wide range of different network architectures and optimization strategies for PINNs has emerged. They emphasize well-behaved derivatives with respect to the input domain (Sitzmann et al., 2020), allow higher expressivity for modelling high-frequency data (Tancik et al., 2020; Wang et al., 2021b), or resolve gradient pathologies within PINNs (Wang et al., 2021a). As our method does not rely on a specific architecture, any such improvement can be easily combined with the proposed pdPINNs. For the experiments in this submission we will use simple fully-connected networks with sinusoidal (Sitzmann et al., 2020) or tanh activations (see section 5).

Finite total density. For reformulating the predicted density ρΘ as a probability, we have to ensure non-negativity as well as a finite integral over the input domain Ω. Non-negativity can for example be achieved via a squared activation function after the last layer. An additional bounded activation function g is then added, which guarantees the output to be within a pre-specified range [0, cmax]. The integral over Rd can then be enforced to be finite by multiplying the bounded output with a Gaussian kernel. Summarizing these three steps, let ρ̃Θ denote the output of the last layer of our fully connected neural network and pgauss(x) = N(x; µ, Σ); then we predict the density ρΘ as

ρΘ(t,x) = pgauss(x) · g(ρ̃Θ(t,x)²) ≤ cmax · pgauss(x).  (14)

In practice, the choice of cmax does not affect the model as long as it is sufficiently large. The used mean µ and covariance Σ are maximum likelihood estimates based on the observations x, i.e. the sample mean x̄ and covariance Σ̄ of the sensor locations. To allow more flexibility in the network, we add a scaled identity matrix to the covariance, Σ = Σ̄ + c · I, which can be set to a large value for solving PDEs when only initial conditions, but no observations, are available.
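A possible realization of the density head in Eq. 14 is sketched below. The bounded activation g is chosen here as a scaled tanh, which is one option consistent with the description above but not necessarily the authors' exact choice, and backbone stands for the fully-connected network producing ρ̃Θ.

```python
import torch
import torch.nn as nn

class DensityHead(nn.Module):
    """Sketch of Eq. 14: square the raw network output for non-negativity, bound it
    to [0, c_max) with a saturating activation g, and multiply by a Gaussian
    envelope p_gauss(x) so that the predicted density has a finite integral."""
    def __init__(self, backbone, mu, cov, c_max=1e3):
        super().__init__()
        self.backbone = backbone    # maps concatenated (t, x) to an unconstrained scalar
        self.c_max = c_max
        self.gauss = torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)

    def forward(self, t, x):
        rho_tilde = self.backbone(torch.cat([t, x], dim=-1))     # (n, 1)
        bounded = self.c_max * torch.tanh(rho_tilde.pow(2))      # g(rho_tilde^2) in [0, c_max)
        envelope = self.gauss.log_prob(x).exp().unsqueeze(-1)    # p_gauss(x), shape (n, 1)
        return envelope * bounded                                # Eq. 14
```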
Markov chain Monte Carlo (MCMC) sampling. Finally, MCMC methods allow us to draw samples from the unnormalized density ρΘ(t,x). We consider several MCMC samplers and emphasize that the wide range of well-established methods offers the ability to use a specialized sampler for the considered problem, should the need arise. Gradient-based samplers such as Hamiltonian Monte Carlo (Duane et al., 1987; Betancourt, 2017) are particularly suited for our setting, as the gradients of ρΘ with respect to the input space are readily available. For problems where boundaries are known and we have to sample from a constrained region, a bijective transformation is used so that the Markov chain may operate in an unconstrained space (Parno & Marzouk, 2018). In our experience, both Metropolis-Hastings and Hamiltonian Monte Carlo already worked sufficiently well for a wide range of PDEs without requiring much fine-tuning. We highlight that pdPINNs do not directly depend on MCMC as a sampler, and alternative sampling methods such as modern variational inference schemes (Rezende & Mohamed, 2015) can also be directly used as a substitute. For details regarding the samplers used and their implementation we refer to the Experiments section 5 and Appendix section A.1.

5 EXPERIMENTS

In this section we demonstrate the advantages of pdPINNs compared to uniform sampling, importance sampling (Nabian et al., 2021), as well as the adaptive refinement methods RAR (Lu et al., 2021) and OT-RAR (Tadiparthi & Bhattacharya, 2021). Despite the term "uniform sampling", we rely in all our experiments on quasi-random Sobol sequences for more stable behavior in the low-sample regime. To guarantee a fair comparison, we considered slight variations of the proposed implementations of RAR and OT-RAR, so that only a limited number of collocation points are used. For the pdPINNs we consider multiple MCMC schemes, including inverse transform sampling (IT-pdPINN), Metropolis-Hastings (MH-pdPINN), and Hamiltonian Monte Carlo (HMC-pdPINN) methods. The models in sections 5.1 and 5.2 are implemented in PyTorch (Paszke et al., 2019), with a custom Python implementation of the MH and Inverse Transform samplers. For the Fokker-Planck experiment in section 5.3, we make use of the efficient MCMC implementations provided by TensorFlow Probability (Abadi et al., 2016; Lao et al., 2020) and the utilities of the DeepXDE library (Lu et al., 2021). More details, as well as further experiments comparing the wall-time of the various samplers, are provided in the Appendix, with the code being provided in the supplementary material.

5.1 MASS CONSERVATION FOR SIMULATED PARTICLES

As a challenging prediction task we consider a setting motivated by the real-world problem of modelling bird densities and velocities measured from a set of weather radars (Dokter et al., 2011; Nussbaumer et al., 2019; 2021) – or, more generally, the area of radar aeroecology. A non-steady compressible fluid in three dimensions is simulated by propagating fluid parcels through a pre-defined velocity field, i.e. the fluid is simulated using the conservation of mass as the underlying PDE (see Eq. 12). To provide the network with training observations, we introduce a set of spatially fixed sensors (comparable to radars) which count over time the number of fluid parcels within a radius r and over 21 contiguous altitude layers. Another disjoint set of sensors is provided for the validation set, while the test performance is evaluated on a grid.
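As a rough illustration of how such simulated sensor counts could be produced (the exact data-generation code is in the supplementary material; the names and array shapes below are assumptions for the sketch), consider:

```python
import numpy as np

def sensor_counts(particles, sensors, radius, z_edges):
    """Count, for a single timestep, the particles whose xy-distance to each sensor
    lies within `radius`, binned into contiguous altitude layers defined by
    `z_edges` (22 edges for 21 layers)."""
    # particles: (N, 3) positions; sensors: (S, 2) xy locations of the "radars".
    counts = np.zeros((len(sensors), len(z_edges) - 1), dtype=int)
    for s, loc in enumerate(sensors):
        d_xy = np.linalg.norm(particles[:, :2] - loc, axis=1)
        inside = particles[d_xy <= radius]
        counts[s], _ = np.histogram(inside[:, 2], bins=z_edges)
    return counts
```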
The birds-eye view of the setting is shown in Figure 2a, where circles indicate the area covered by the radars. Figure 2b additionally shows the 3D simulated data projected along the z-axis and over time. In the Appendix section A.3 we describe the data generation and training setting in detail and provide the corresponding code in the supplementary. For modeling the density and velocity, two sinusoidal representation networks (SIREN) (Sitzmann et al., 2020) ρΘ1(t,x) and vΘ2(t,x) are used, which are then regularized by enforcing the continuity equation for the conservation of mass (see Eq. 12). To showcase the sample efficiency of pdPINNs, experiments are performed over a wide range of collocation points (256 to 65536). In each setting the PDE-weights w2 (see Eq. 5) were selected with a grid search based on the highest 1st quartile R2 in a validation set. The resulting box-plots of the test R2 are provided in Figure 3, where the “Baseline” corresponds to training without any PDE loss. The proposed pdPINN approach clearly outperforms alternative (re-)sampling methods across all numbers of collocation points. Already with very few collocation points (512) pdPINNs achieve results that require orders of magnitude more points (32768) for uniform sampling. Finally, we observe that the performance gap shrinks as the number of collocation points increases, eventually converging to the same limiting value. Even when getting close to the memory limit of a NVIDIA Titan X GPU, other sampling strategies at best achieve comparable results with pdPINNs. In the Appendix (Figure A.6) we provide an additional qualitative comparison of the mass conservation between OT-RAR and MH-pdPINN 2048 samples. As an additional experiment we simplified the setting by projecting the data onto the xy-axis, i.e. the birds-eye view, which is a common setting for geostatistical data (e.g. in Nussbaumer et al. (2019)). The results in this 2D setting, which are provided in the Appendix (Figure A.8) and described in details in section A.3, are very similar in nature to the 3D setting, although with a smaller performance gap with respect to alternative sampling methods. This decrease of the gap is to be expected, as the lower dimensional space is much easier to explore with uniform proposals. 5.2 HEAT EQUATION We further consider a 2D diffusion problem, namely the heat equation introduced in section 3, where randomly distributed sensors provide measurements of the temperature. We focus on a general setting with the initial conditions being zero temperature everywhere except for a specified region, as shown in Figure 4a, and we let the system evolve for t ∈ [0, 0.2]. The networks are only provided sensor measurements of the temperature; for further details see the Appendix section A.4. Temperature predictions for PINNs with uniform sampling and pdPINNs are illustrated in Figure 4b and 4c, respectively, with the ground truth in Figure 4a. We can observe that the uniform sampling strategy does not allow to focus on the relevant parts of the domain, i.e. regions with high temperature, and that it visibly fails to reconstruct the temperature profile. In contrast, the pdPINN promotes sampling in regions of higher density and predicts the true temperature more reliably. 
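For reference, the PDE residual used to regularize the temperature network in this experiment (Eq. 13) can be computed with automatic differentiation roughly as follows. This is a hedged sketch with placeholder names (T_theta for the temperature network), not the exact code used in the paper.

```python
import torch

def heat_residual(T_theta, t, x, alpha):
    """Residual f_Theta = d_t T - alpha * Laplacian(T) of the heat equation (Eq. 13),
    evaluated at collocation points (t, x) via automatic differentiation."""
    t = t.detach().requires_grad_(True)      # (n, 1)
    x = x.detach().requires_grad_(True)      # (n, d)
    T = T_theta(t, x)                        # (n, 1)
    dT_dt = torch.autograd.grad(T.sum(), t, create_graph=True)[0]
    grad_x = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    lap = torch.zeros_like(T)
    for i in range(x.shape[1]):              # Laplacian = trace of the Hessian w.r.t. x
        second = torch.autograd.grad(grad_x[:, i].sum(), x, create_graph=True)[0]
        lap = lap + second[:, i : i + 1]
    return dT_dt - alpha * lap
```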
We also evaluate quantitatively the performance of the two approaches in terms of the R2 test error over the predicted temperature and illustrate the results in the Appendix section A.4, where we again observe the same convergence between uniform sampling and pdPINNs for high numbers of collocation points.

5.3 FOKKER-PLANCK EQUATION

For a demonstration of a forward problem, i.e. a setting without any observed data but only initial conditions, we solve the Fokker-Planck (FP) equations in a setting where an analytical solution is available (cf. Särkkä & Solin (2019)). The FP equations describe the evolution of the probability density of the movement of Brownian particles under a drift. More specifically, assume we are given particles at time t0, which are distributed according to p(t0, x). Let the movements of these particles be described by the following stochastic differential equation, where Wt denotes the standard Wiener process:

dXt = µ(t, Xt) dt + σ(t, Xt) dWt,  (15)

with known drift µ(t, Xt) and diffusion coefficient D(t, Xt) = σ²(t, Xt)/2. The FP equation for the probability density p(t, x) of the random variable Xt is then given by

∂p(t, x)/∂t = −(∂/∂x)[µ(t, x) p(t, x)] + (∂²/∂x²)[D(t, x) p(t, x)].  (16)

We train a network to predict the (probability) density pΘ(t, x) given a known sinusoidal drift and constant diffusion, which are discussed in detail in the Appendix. Data is only provided for the initial condition, and the PDE loss is based on Eq. 16 within the space Ω = [−1.5, 1.5] and time t ∈ [−1, 1]. As the analytical solution is available in the form of a probability density, we can estimate the KL divergence KL(p||pΘ) to evaluate the performance. Furthermore, we can sample collocation points from the true particle distribution p(t, x) (referred to as "p(t, x) as sampler"), offering a "best case scenario" for pdPINNs. A total of 5000 collocation points were used, and weights were manually tuned based on the error on a validation set.

Figure 5a shows the evolution of the KL divergence during training, highlighting that pdPINN-based methods require fewer steps to achieve a low divergence. In addition, sampling from the true particle distribution leads to the fastest improvement and the lowest divergence after 30000 training steps. A qualitative comparison of the results is given in Figure 5b, showing that RAR and uniform sampling fail to propagate the sine wave forward. The ground truth of the problem and wall-times for the different methods are given in the Appendix section A.5.

6 CONCLUSION

In this work, we introduced a general extension to PINNs applicable to a great variety of problem settings involving physics-based regularization of neural networks. In order to overcome the limitations of classical mesh-based Eulerian PINNs, we introduce a novel PDE loss that is defined with respect to the particle density for rather general types of PDEs. By employing MCMC methods to sample collocation points from the density approximated by the network, we derive an efficient and easy-to-implement improvement for providing a more appropriate regularization objective in PINNs. In particular, our new pdPINNs are completely mesh-free, thereby overcoming severe efficiency problems of classical PINNs in high-dimensional and sparse settings. Further, the absence of a mesh allows us to elegantly handle settings with uncertain or unknown domain boundaries.
As we have demonstrated, our method is applicable to a wide spectrum of PDEs, ranging from hydrodynamic flow problems to electro- and thermo-dynamic problems, as well as more general applications of the Fokker-Planck equations.

A APPENDIX

A.1 BACKGROUND SAMPLING FOR PDPINNS

At initialization, the network prediction ρΘ is random and thus does not carry any useful information, i.e. sampling from this density would be meaningless. Therefore, we start training the pdPINNs with a warm-up phase in which samples are obtained from a pre-specified background distribution:

x ∼ pbg(t,x) = p(t) pbg(x|t),  (17)

with p(t) = U(0, T). To avoid introducing a mesh, we could rely on the previously estimated Gaussian distribution introduced in Section 4, i.e. pbg(x|t) = pgauss(x). As a second alternative approach, we consider random linear combinations within the convex hull of {x(i)}, i = 1, …, N, spanned by c data points summarized as rows of a matrix Z ∈ Rc×d. This leads to x = mZ with weights m ∈ Rc, which can be drawn from a Dirichlet distribution, i.e. m ∼ Dir(α = 1). Of course, a uniform sampling mechanism on a defined region is also suitable, and the definitive choice depends on the data and PDE at hand. However, we found that all of these methods work well in practice. We initially draw all samples from the background distribution, and then slowly increase the proportion of samples obtained from the particle density, as we found that keeping some background samples slightly helps in the training.

A.2 IMPLEMENTATION OF RAR AND OT-RAR

For our comparison, we considered the adaptive refinement methods RAR and OT-RAR, proposed by Lu et al. (2021) and Tadiparthi & Bhattacharya (2021, preprint). Both methods rely on consecutive refinements of a fixed grid in the initial proposal. The number of collocation points is steadily increased, and collocation points once added will not be removed. To allow for a fairer comparison, we adapt both methods to use a limited budget of points, and in addition we regularly resample them. This leads to a slightly modified version of the methods which is similar in spirit. For learning the linear mapping proposed by Tadiparthi & Bhattacharya (2021), we rely on the PyOT (Flamary et al., 2021) implementation of Knott & Smith (1984). The pseudo-code for sampling a set of collocation points is given in Algorithm 1 and Algorithm 2. The required input fΘ refers to the PDE approximated by the network, as discussed in Section 1. For more specific details on the methods we refer to the original papers.

Algorithm 1 Adapted RAR
Input: fΘ, uniform distribution UB, number of col. points k, previous col. points Xprev.
 Xprop ← [x1, x2, . . . , xk]T with xi ∼ UB  ▷ Sample proposals
 Xcomb ← concat(Xprev, Xprop)  ▷ Concatenate old and new points
 Xnew ← topk(Xcomb, ||fΘ(Xcomb)||₂², k)  ▷ Keep top k proposed points based on fΘ
Output: Xnew

A.3 EXPERIMENTS: CONSERVATION OF MASS

In the supplementary material we provide code in Python for the data generation and for the pdPINN model. Below we provide the details for all the experiments we conducted. Furthermore, we provide short videos showing the predicted density movements for each approach. More details on this can be found in the README.html provided in the supplementary files. All experiments were run on a computing cluster using Nvidia GeForce GTX Titan X GPUs with 12 GB VRAM. Settings that required more memory were run on an RTX8000 with 48 GB VRAM. Up to 16 Titan X GPUs could be used in parallel, or 4 RTX8000.
In most settings, training in each experiment took less than 10 minutes.

Algorithm 2 Adapted OT-RAR
Input: fΘ, uniform distribution UB, number of col. points k, number of points for empirical distribution j < 2k, previous col. points Xprev.
 Xprop ← [x1, x2, . . . , xk]T with xi ∼ UB  ▷ Sample proposals
 Xcomb ← concat(Xprev, Xprop)  ▷ Concatenate old and new points
 Xtarget ← topk(Xcomb, ||fΘ(Xcomb)||₂², j)  ▷ j samples for target empirical distribution
 Xsource ← [x1, x2, . . . , xj]T with xi ∼ UB  ▷ j samples for source empirical distribution
 MOT ← LinOT(Xsource, Xtarget)  ▷ Obtain linear operator that maps to target distribution
 Xnew ← [x1, x2, . . . , xk]T with xi ∼ UB  ▷ Sample uniformly
 Xmap ← MOT(Xnew)  ▷ Map samples to target distribution
Output: Xmap

A.3.1 ADDITIONAL EXPERIMENTAL RESULTS

3D Setting. Figure A.6 showcases the projection of the density along the z-axis for a random run of the OT-RAR method and the Metropolis-Hastings based pdPINN when using 2048 collocation points. The OT-RAR PINN shows disconnected density predictions that clearly violate mass conservation, whereas the Metropolis-Hastings based pdPINN is capable of mostly preserving it. The boxplot in Figure A.8 highlights the difference in the required number of collocation points.

2D Setting. As mentioned in Section 5, we repeated the Conservation of Mass experiment in a slightly altered setting, where the data is projected onto the xy-plane, reducing it to a 2D+Time problem. The general setup is similar to the 3D setting, although a smaller network and different training parameters are used, which are listed in the sections below.

A.3.2 DATA GENERATION

Here we provide a more detailed description of the generated data, namely the used velocity field and the method for obtaining simulated "radar measurements".

Velocity field. The velocity field in the xy-plane was generated from a scalar potential field Φ : R² → R and the z-component of a vector potential a : R² → R. Through the Helmholtz decomposition¹ we can construct the velocity field vxy : R² → R²:

vxy(x, y) = −∇Φ + (∂a/∂y, −∂a/∂x)ᵀ.  (18)

For both experiments the following fields were used:

Φ(x, y) = −(1/2) (x − 2)(y − 2),  (19)
a(x, y) = −(1/5) exp(−((2/3)x)² − ((2/3)y)²).  (20)

The derivatives were obtained using the symbolic differentiation library SymPy (Meurer et al., 2017). To add a non-steady component, the resulting velocity field is modulated in amplitude as a function of time t ∈ [0, 3]:

vxy,t(t, x, y) = vxy(x, y) · ((3/2) |sin((2/3)πt)| + 0.05).  (21)

The z (altitude) component of the velocity only depends on time and is given by:

vz(t) = 1.6 · sin((4/3)πt).  (22)

¹ This is the 2D formulation of the Helmholtz decomposition, where the vector potential has non-zero components only along the z-axis, as in a3d = [0, 0, a]ᵀ. The full decomposition is commonly written as v3d = −∇Φ3d + ∇ × a3d.

Simulation. For the initial distribution of the fluid, the particle positions were drawn from Gaussian mixtures. For t ∈ [0, 3], these particles were simulated using the above constructed velocity field. Overall, the paths of roughly 240000 parcels were simulated using a basic backward Euler scheme.

Measurements. The measurements at the sensors were obtained by counting the number of particles within a given radius over multiple timesteps. The density corresponds to the mass divided by the sensor area, and the velocity is an average over all the particle velocities.
For the training data additional zero-mean isotropic Gaussian noise is added to all measurements. In the 3D setting, data measurements of density and velocity are obtained by 132 sensors on the xy-plane, within region [−3, 3]2 at 11 equidistant timesteps. In the 2D setting, the same set of sensors is used. A.3.3 ARCHITECTURE AND TRAINING In both experiments, the networks for density ρΘ1 and velocity vΘ2 prediction (parameterized by Θ1 and Θ2, respectively) are fully-connected layers with sinusoidal activation functions, as proposed by Sitzmann et al. (2020). The number of layers and units for each setting is shown in Table A.1. The sine frequency hyperparameter required in the SIREN architecture was tuned by hand according to the validation loss of the baseline model (i.e. without a PDE loss), leading to a sine-frequency of 12 for the 2D setting, and 5 for the 3D setting. We note that the proposed default value of 30 in Sitzmann et al. (2020) heavily overfits our relatively low-frequency data and we thus recommend an adjustment of this hyperparameter for usage in PINNs. For training the network, the ADAM optimizer (Kingma & Ba, 2014) with a learning rate of 8×10−4 (2D Setting) or 10−4 (3D Setting) was used. The learning rate was multiplied by a factor of 0.99 each epoch. All models were trained for 300 (3D setting) or 500 (2D setting) epochs. The 2D setting was trained using full-batch gradient descent, whereas for the 3D setting we used a mini-batch size of 6931. In all experiments we trained and evaluated on 10 different random seeds. A.4 EXPERIMENTS: HEAT EQUATION The dataset for the heat equation experiment was generated by numerically solving the heat equation through the finite difference method, precisely the Forward Time, Centered Space (FTCS) approximation (Recktenwald, 2004). We used Dirichlet boundary conditions in form of zero temperature around a squared shape far away from the relevant domain. These boundary conditions are not provided to the PINNs for a slightly more difficult setting. Overall, the dataset is composed of 1000 training points, 1971120 test points and 492780 validation points. We made sure training points contained enough information about the initial condition, i.e. we selected a sufficient amount of points around the initial source of non-zero temperature. In contrast, validation and test points are taken uniformly in time and space. During the warm-up phase of the pdPINN training, collocation points were sampled uniformly, and afterwards 90% of the samples were drawn from the particle density distribution, which is proportional to the modeled temperature. Collocation points were re-sampled every 500 epochs. Differently from previous experiments, the employed architecture is a fully-connected two-layer neural network with 32 hidden units and tanh activations. The implementation is in PyTorch (Paszke et al., 2019), using the ADAM optimizer (Kingma & Ba, 2014) combined with an exponential learning rate scheduler which multiplies the learning rate by a factor of 0.9999 at each epoch, starting with a rate of 10−4 and decreasing it until reaching a minimum value of 10−5. Training was terminated through early-stopping, as soon as the validation R2 didn’t improve for more than 3000 epochs. Additional results. Figure A.9 illustrates the test R2 of the predicted T averaged over 20 different seeds. Error bars correspond to 95% confidence interval for the mean estimation, based on 1000 bootstrap samples, while colors indicate the different PDE weights w2 explored. 
As in previous settings, we show that with few samples (16) the regularization enforced by the PDE loss is not strong enough, leading to comparable results for both approaches (as expected). Hence PINNs and pdPINNs show similar results in this regime. However, as the number of samples increases (32-64-128-256), the PDE loss enforced by the proposed pdPINNs quickly and steadily outperforms uniform sampling. Lastly, we also verified that in the limit of many samples (512-1024) the two sampling strategies converge, as in such a low-dimensional domain the uniform samples fully and densely cover the considered area. This, again, is in line with the observed results of the other experiments.

A.5 EXPERIMENTS: FOKKER-PLANCK EQUATIONS IN TENSORFLOW

Within the Fokker-Planck experiment we showcase the different training behaviors of uniform sampling, RAR, and multiple MCMC samplers. Due to the low dimensionality of the problem, we additionally consider an Inverse-Transform (IT) sampler (Steele, 1987) for efficiently sampling from the density. The IT sampler relies on the empirical cdf estimated via uniform samples drawn over the whole domain. This method does not require building up a Markov chain, and is thus very fast, but only works well in low dimensions. More specifically, we compare the following methods for selecting collocation points, with a highly efficient implementation of the MCMC methods provided by TensorFlow Probability:

I.) Uniform sampling
II.) Residual Adaptive Refinement (Lu et al., 2021)
III.) pdPINN with Inverse-Transform (IT) sampling (Steele, 1987)
IV.) pdPINN with Metropolis-Hastings (MH) MC with parallel tempering (Earl & Deem, 2005)
V.) pdPINN with Hamiltonian MC (HMC) with parallel tempering (Earl & Deem, 2005) and dual averaging step-size adaptation (Hoffman et al., 2014, section 3.2)

A.5.1 SETTING AND ANALYTICAL SOLUTION

We consider the following setting over the time interval [t0, tn] = [−1, 1], with drift function µ, noise σ, and initial particle positions p(x|t = t0) given by

µ(Xt, t) = µ(t) = sin(10t),  (23)
σ(Xt, t) = σ = 0.06,  (24)
p(x|t = t0) = N(0, 0.02² · Id).  (25)

The PDE has an analytical solution (cf. Särkkä & Solin (2019)), which is given by

p(x|t) = N(µs(t), σs²(t)),  (26)
p(t) = U(t0, tn),  (27)
µs(t) = −cos(10t)/10 + cos(10)/10,  (28)
σs²(t) = 0.0036 t + 0.004.  (29)

For evaluating the deviation of our prediction from the solution, we evaluate the KL divergence between the analytical solution and the network approximation, KL(p(x, t) || p̂Θ(x, t)), by sampling 10000 points from the true p(x, t).

A.5.2 SETUP

We use a SIREN network and additionally sample 5000 collocation points at the initial time-step, which is the default behavior of DeepXDE. An overview of the architecture and training details is given in Table A.2. Experiments were performed with a NVIDIA GeForce RTX 2080 Ti and an Intel(R) Xeon(R) CPU E5-1660 v3 @ 3.00GHz processor.

A.5.3 WALL TIME

The wall times for the different methods are provided in Figure A.10. Although Metropolis-Hastings and Hamiltonian Monte Carlo require more time per step compared to uniform sampling, the used inverse transform sampling achieves a similar speed.
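To close the appendix, the analytical solution of Eqs. 26-29 and the Monte Carlo KL evaluation described in section A.5.1 can be sketched as follows. This is an illustrative snippet, not part of the released code; p_hat stands for the trained network density pΘ and is an assumption of the sketch.

```python
import numpy as np

def analytic_solution(t, x):
    """Analytical Fokker-Planck solution (Eqs. 26-29): a Gaussian with
    time-dependent mean and variance."""
    mu_s = -np.cos(10 * t) / 10 + np.cos(10) / 10
    var_s = 0.0036 * t + 0.004
    return np.exp(-(x - mu_s) ** 2 / (2 * var_s)) / np.sqrt(2 * np.pi * var_s)

def kl_estimate(p_hat, n=10000, t0=-1.0, tn=1.0, seed=0):
    """Monte Carlo estimate of KL(p || p_hat) using samples from the true p(t, x)."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(t0, tn, size=n)                       # t ~ U(t0, tn), Eq. 27
    mu_s = -np.cos(10 * t) / 10 + np.cos(10) / 10
    x = rng.normal(mu_s, np.sqrt(0.0036 * t + 0.004))     # x ~ p(x | t), Eq. 26
    return np.mean(np.log(analytic_solution(t, x)) - np.log(p_hat(t, x)))
```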
1. What is the focus and contribution of the paper regarding physics-informed methods for solving PDEs?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and applicability?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the extension of the general framework of having PDEs as constraints in the loss function?
5. Can the proposed method be applied to other areas beyond PDEs, and what would be the potential challenges and limitations?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a network-based method, in the realm of physics-informed methods, to solve a class of PDEs. The proposed method mainly extends the general framework of having the PDE as a constraint in the loss function by introducing a particle density and the locality of particle interactions. It also proposes to use MCMC methods to sample the collocation points.

Strengths And Weaknesses
The strengths of this paper are:
1. The idea of using a particle density and emphasizing the locality of interactions in the loss function is novel. It provides a rigorous view of defining a specific loss function for the general PDE-constrained loss function (of neural networks). The proposed method also incorporates an MCMC sampling strategy, which supports the idea very well.
2. The paper is very well organized and clearly written. The literature review is comprehensive, with arguments that introduce the motivation of the proposed method. Overall, the paper is easy to read for an audience in a broad area of deep learning.
The weaknesses of this paper are:
1. There is no novel network-architecture proposal, which might limit the novelty of this paper. Here, it is worth noting that the novel definition of the loss function does qualify as a good contribution. However, the neural network architecture is not novel.
2. The authors discussed the applicability of the proposed method. However, the limitations of its applicability seem to be obvious, as the authors stated. But this is a rather unfamiliar area for me.

Clarity, Quality, Novelty And Reproducibility
⋅ Regarding clarity, this paper is very clearly written, with many details in each aspect.
⋅ The novelty, as stated above, lies mostly in the proposal of a specific loss-function framework rather than a novel neural network architecture, the latter of which could limit the novelty for a paper targeting a top venue.
⋅ The reproducibility is feasible, and the authors also attached code in the supplementary materials.
ICLR
Title Mesh-free Eulerian Physics-Informed Neural Networks Abstract Physics-informed Neural Networks (PINNs) have recently emerged as a principled way to include prior physical knowledge in form of partial differential equations (PDEs) into neural networks. Although PINNs are generally viewed as mesh-free, current approaches still rely on collocation points within a bounded region, even in settings with spatially sparse signals. Furthermore, if the boundaries are not known, the selection of such a region is difficult and often results in a large proportion of collocation points being selected in areas of low relevance. To resolve this severe drawback of current methods, we present a mesh-free and adaptive approach termed particle-density PINN (pdPINN), which is inspired by the microscopic viewpoint of fluid dynamics. The method is based on the Eulerian formulation and, different from classical mesh-free method, does not require the introduction of Lagrangian updates. We propose to sample directly from the distribution over the particle positions, eliminating the need to introduce boundaries while adaptively focusing on the most relevant regions. This is achieved by interpreting a nonnegative physical quantity (such as the density or temperature) as an unnormalized probability distribution from which we sample with dynamic Monte Carlo methods. The proposed method leads to higher sample efficiency and improved performance of PINNs. These advantages are demonstrated on various experiments based on the continuity equations, Fokker-Planck equations, and the heat equation. 1 INTRODUCTION Many phenomena in physics are commonly described by partial differential equations (PDEs) which give rise to complex dynamical systems but often lack tractable analytical solutions. Important examples can be found for instance in fluid dynamics with typical applications in the design of gas and steam turbines (Oosthuizen & Carscallen, 2013), as well as modeling the collective motion of self-driven particles (Marchetti et al., 2013) such as flocks of birds or bacteria colonies (Szabó et al., 2006; Nussbaumer et al., 2021). Despite the relevant progress in establishing numerical PDE solvers, such as finite element and finite volume methods, the seamless incorporation of data remains an open problem (Freitag, 2020). To fill this gap, Physics-informed Neural Networks (PINNs) have emerged as an attractive alternative to classical methods for data-based forward and inverse solving of PDEs. The general idea of PINNs is to use the expressive power of modern neural architectures for solving partial differential equations (PDEs) in a data-driven way by minimizing a PDE-based loss, cf. Raissi et al. (2019). Consider parameterized PDEs of the general form f(t,x|λ) := ∂tu(t,x) + P (u|λ) = 0, (1) where P is a non-linear operator parameterized by λ, and ∂t is the partial time derivative w.r.t. t ∈ [0, T ]. The position x ∈ Ω is defined on a spatial domain Ω ⊆ Rd. The PDE is subject to initial condition g0 u(0,x) = g0(x) (2) for x ∈ Ω, and boundary conditions g∂Ω u(t,x) = g∂Ω(x) (3) for x ∈ ∂Ω and t ∈ [0, T ]. The main idea of PINNs consists in approximating u(t,x) (and hence f(t,x)) with a neural network given a small set of N noisy observations uobs u(t(i),x(i)) + ϵ(i) = u (i) obs (4) with noise ϵ(i) ≪ u(i) ∀i ∈ {0, 1, . . . , N}. 
This allows us to consider the following two important problem settings: If λ is known, the PDE is fully specified, and we aim to find a solution u in a data-driven manner by training a neural network. The PDE takes the role of a regularizer, where the particular physical laws provide our prior information. A second setting considers the inverse learning of the parameters λ by including them into the optimization process in order to infer physical properties such as the viscosity coefficient of a fluid (Jagtap et al., 2020). Initial work on solving time-independent PDEs with neural networks with such PDE-based penalties was pioneered by Dissanayake & Phan-Thien (1994) and van Milligen et al. (1995), with later adoptions such as Parisi et al. (2003) extending it to non-steady and time-dependent settings. Loss functions. Typically, PINNs approximate f(t,x) by the network fΘ(t,x) in which the parameters Θ are adjusted by minimizing the combined loss of (i) reconstructing available observations (Lobs), (ii) softly enforcing the PDE constraints on the domain (Lf ), and (iii) fulfilling the boundary (Lb) and initial conditions (Linit), i.e. Θ = argmin Θ [w1Lobs(X, t,uobs,Θ) + w2Lf (Θ) + w3Lb(Θ) + w4Linit(Θ)] , (5) with loss weights wi ∈ R≥0. A common choice for Lobs, Lb, and Linit is the expected L2 loss, approximated via the average L2 loss over the observations and via sampled boundary and initial conditions, respectively. It should be noted that the formulation of the forward and inverse problem are identical in this setting, as observations and initial conditions are implemented in a similar manner. Enforcing the PDE. Although PINNs are by nature mesh-free, the PDE loss Lf in Eq. 5 used for the soft enforcement of Eq. 1 requires a similar discretization step for approximating an integral over the continuous signal domain, Lf (Θ)= 1 |[0, T ]× Ω| T∫ t=0 ∫ Ω ||fΘ(t,x)||22dx dt=Ep(t,x) [ ||fΘ(t,x)||22 ] ≈ 1 n n∑ i=1 ||fΘ(ti,xi)||22 (6) with p(t,x) being supported on [0, T ]× Ω. The points {(t(j),x(j))}nj=1 ⊂ [0, T ]× Ω on which the PDE loss is evaluated are commonly referred to as collocation points. This formulation of PINNs for solving Eq. 1 is an Eulerian one, as the function fΘ is updated by evaluating the PDE with respect to collocation points fixed in space. Initial approaches for selecting the collocation points in PINNs relied on a fixed grid (Lagaris et al., 1998; Rudd, 2013; Lagaris et al., 2000), followed up by work proposing stochastic estimates of the integral via (Quasi-) Monte Carlo methods (Sirignano & Spiliopoulos, 2018; Lu et al., 2021; Chen et al., 2019) or Latin Hypercube sampling (Raissi et al., 2019). However, these approaches to Eulerian PINNs cannot be directly applied if there are no known boundaries or boundary conditions, e.g. for Ω = Rd. Additionally, problems can arise if the constrained region is large compared to the area of interest. Considering for example the shock wave (of a compressible gas) in a comparably large space, most collocation points would fall into areas of low density. We argue that due to the locality of particle interactions, the regions with higher density are more relevant for regularizing the network. To address these shortcomings of previous methods, we propose a mesh-free and adaptive approach for sampling collocation points, illustrated on the example of compressible fluids. By changing p(t,x) to the distribution over the particle positions in the fluid we effectively change the loss functional in Eq. 6. 
We then generalize to other settings, such as thermodynamics, by interpreting a positive, scalar quantity of interest with a finite integral as a particle density. Within this work we specifically focus on PDEs that can be derived based on local particle interactions or can be shown to be equivalent to such a view, as for example is the case for the heat equation with its connection to particle diffusion. Notably, we do not require the introduction of Lagrangian updates, as classical mesh-free methods do, which would be based on evaluating the PDE with respect to moving particles (see also section 2). Main contributions. The main contributions of this paper are as follows: • We demonstrate that PINNs with uniform sampling strategies (and refinement methods based on uniform proposals) fail in settings with spatially sparse signals as well as in unbounded signal domains; these problems can severely degrade the network’s predictive performance. • In order to overcome these limitations of existing approaches, we propose a truly mesh-free version of Eulerian PINNs, in which the collocation points are sampled using physicsmotivated MCMC methods. By staying within the Eulerian framework, we avoid conceptual challenges of classical mesh-free methods based on Lagrangian updates such as the enforcement of boundary conditions. • The proposed model is applicable to a huge range of dynamical systems governed by PDEs that share an underlying microscopic particle description, such as several hydrodynamic, electro- and thermo-dynamic problems. • We rigorously evaluate and compare our proposed method with existing approaches in high-dimensional settings. Compared to existing mesh refinement methods, significantly fewer collocation points are required to achieve similar or better predictive performances, while still being more flexible. 2 RELATED WORK Mesh-Free Fluid Dynamics. Classical mesh-free approaches in computational fluid dynamics are based on non-parametric function representations, with Smoothed Particle Hydrodynamics (SPH) (Lind et al., 2020; Gingold & Monaghan, 1977) being the most prominent example. In SPH, fluid properties such as the density and pressure are represented by a discrete set of particles and interpolated using a smoothing kernel function. For updating the function forward in time, the particles have to be propagated according to the Lagrangian formulation of the PDE, relying on the kernel for computing spatial derivatives. One of the benefits of such a representation is that mass is conserved by construction. However, Lagrangian updates become challenging when enforcing boundary conditions, requiring the introduction of ad-hoc "dummy" or "mirror" particles (Lind et al., 2020). Instead, we present a mesh-free, particle-based, PINN that does not require Lagrangian updates, and is already applicable in the Eulerian formulation. It should be noted that the proposed pdPINNs can in principle be combined with Lagrangian updates such as proposed by Raissi et al. (2019) and later by Wessels et al. (2020). But as the intention of this work is to improve upon current Eulerian PINNs, we refer to future work for the comparison and extension to the Lagrangian formalism. Alternative Meshes and Losses for PINNs. Recent work proposes local refinement methods for PINNs by adding more samples within regions of high error (Lu et al., 2021; Tadiparthi & Bhattacharya, 2021). Residual adaptive refinement (RAR) is suggested by Lu et al. 
(2021), which is based on regularly evaluating the PDE loss on a set of uniformly drawn samples. The locations corresponding to the highest PDE loss are then added to the set of collocation points used in training. Tadiparthi & Bhattacharya (2021, preprint) further enhance RAR by learning, via an optimal transport objective, a linear map between the uniform distribution and the distribution over the PDE loss. By sampling uniformly and subsequently transforming these samples, the method attempts to focus on regions of higher error. Due to the conceptual similarity to RAR, we will denote this method as "OT-RAR". The work of Nabian et al. (2021) explores Importance Sampling based on the (unnormalized) proposal distribution ||fΘ(t,x)||₂² for a more sample-efficient evaluation of Eq. 6. Samples are drawn using a variation of Inverse Transform sampling (Steele, 1987). However, in all these cases the underlying mechanism for exploring regions of high error is based on (quasi-) uniform sampling within the boundaries. As such, these methods do not resolve the issue of unknown boundaries and furthermore become infeasible in higher dimensions.

Kinetic Theory: From particles to PDEs. Kinetic theory shows that essential conservation laws of fluids can be derived from a microscopic (or molecular) viewpoint (Born & Green, 1946). The dynamics of a fluid are described starting from a set of individual particles. The basis of this approach is the so-called molecular distribution function Ψ over phase space, i.e. Ψ(t,x,v), such that

∫_{∆x} ∫_{∆v} Ψ(t,x,v) dv dx (7)

is the probability that a molecule with a velocity within ∆v = ∆v1∆v2∆v3 occupies the volume ∆x = ∆x1∆x2∆x3. Based on this distribution function, it is possible to define common quantities such as the (mass or particle) density and the (local mean) velocity, and to derive macroscopic PDEs by considering the local interactions of individual particles. The one-particle phase space is commonly known from its application in the Boltzmann equation for modelling two-body interactions describing gases (Green, 1956) and active matter (e.g. flocks of birds) (Bertin et al., 2006). The more general form including higher interaction terms is necessary for deriving conservation laws of liquids (Born & Green, 1946).

3 PARTICLE-DENSITY PINNS

In this section we introduce the concept of mesh-free particle-density PINNs (pdPINNs). Firstly, we examine limitations of the common PDE loss in Eq. 6 and, secondly, we present a solution by integrating over the position of particles instead of the full support of the signal domain. The underlying assumption of our approach is that the dynamics described by the PDE can be explained in terms of local interactions of particles. This is the case, for instance, for commonly considered dynamics of gases, liquids or active particles (Hoover & Hoover, 2003; Toner & Tu, 1995).

Existing limitations of Eulerian PINNs. Consider the problem of modeling a (possibly non-steady) compressible fluid, i.e. a fluid with a spatially and temporally evolving density ρ(t,x) and velocity v(t,x). For the sake of notational brevity, we will denote these by ρ and v. Given noisy observations, our particular interest lies in the prediction of particle movements, and hence in the approximation of the density (and potentially other physical quantities) with a neural network ρΘ. Additional quantities such as the velocity or pressure might also be observed and modeled.
Commonly, the PDE then serves as a physics-based regularizer of the network by enforcing the PDE loss Lf in Eq. 6 during standard PINN training. For this, Lf is evaluated on a set of collocation points that are, for example, uniformly distributed on a bounded region. However, the limitations of this approach already become apparent when considering a simple advection problem defined by the following PDE:

∂tρ + v · ∇ρ = 0. (8)

Figure 1 illustrates a one-dimensional case on the domain [0, T] × Ω, with Ω = R, and a known constant velocity v ∝ 1. We measure the density ρ(i) at different (spatially fixed) points in time and space {(t(i), x(i))}, on which a neural network ρΘ(t,x) is trained. For optimizing the standard PDE loss Lf as given in Eq. 6, we would require a bounded region ΩB := [a, b] ⊂ Ω with a < b and a, b ∈ R. This, in turn, leads to two issues:

1. Since the moving density occupies only a small subset of Ω, uniformly distributed collocation points within ΩB will enforce Eq. 8 mostly in areas of low density. This results in insufficient regularization of ρΘ.
2. Defining a suitable bounded region ΩB requires a priori knowledge about the solution of the PDE, which is generally not available. Choosing too tight boundaries would lead to large parts of the density moving out of the considered area ΩB, while too large boundaries would instead lead to poor regularization, as this worsens the sparsity problem of issue (1.).

In practice, most Eulerian PINN approaches opt for naively defining a sufficiently wide region ΩB, resulting in a poor reconstruction. In the context of our advection problem, this is showcased in Figure 1b. To properly resolve the aforementioned issues, one should (i) focus on areas that have a relevant regularizing effect on the prediction of ρΘ and (ii) adapt to the fluid movements without being restricted to a predefined mesh.

Mesh-Free Eulerian PINNs. We thus propose to reformulate the PDE loss in Eq. 6 as the expectation of ||fΘ(t,x)||₂² with respect to the molecular distribution Ψ(t,x) introduced in the related work section 2:

L_pd(Θ) ≈ ∫_{t=0}^{T} ∫_{Ω} Ψ(t,x) ||fΘ(t,x)||₂² dx dt. (9)

This completely removes the need to define ad-hoc boundaries while providing the ability to flexibly focus on highly relevant regions, i.e. those that are more densely populated. As the particle density is, up to a normalization constant, identical to the occupation probability Ψ(t,x) of a molecule, we can estimate L_pd via samples drawn from the normalized particle density, which we denote by ρN. For homogeneous fluids, this coincides with the normalized mass density. In summary, we propose to draw collocation points from the normalized density:

(ti, xi) ∼ ρN(t,x) = (1/Z) ρ(t,x). (10)

The true particle positions and the density ρN are, however, unknown in practice. Instead, we have to rely on the learned density ρΘ(t,x) as a proxy provided by the neural network. We denote the associated normalized PDF by qΘ(t,x) = (1/Z′) ρΘ(t,x) with support on [0, T] × Ω. The PDE loss is then defined as the expectation w.r.t. qΘ(t,x):

L_pd(Θ) = E_{qΘ(t,x)} [ ||fΘ(t,x)||₂² ] = ∫_{t=0}^{T} ∫_{Ω} qΘ(t,x) ||fΘ(t,x)||₂² dx dt. (11)

In order to approximate this integral, samples need to be drawn from qΘ(t,x). This can be done in a principled way by using dynamic Monte Carlo methods, despite the fact that the normalization constant Z′ is unknown. We highlight that, in contrast to the mesh-based loss in Eq. 6, the loss in Eq. 11 is also suitable for problems on unbounded domains such as Ω = Rd.
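To make this concrete, the following is a minimal PyTorch sketch of how the Monte Carlo estimate of Eq. 11 could be computed for the advection example of Eq. 8 once collocation points have been drawn from qΘ. The network interface rho_net(t, x) and the constant velocity are illustrative assumptions, not the exact implementation used in our experiments.

```python
# Hedged sketch: Monte Carlo estimate of the particle-density loss (Eq. 11) for
# the 1D advection equation (Eq. 8). `rho_net(t, x)` is an assumed interface.
import torch

def advection_residual(rho_net, t, x, v=1.0):
    """Pointwise residual f_Theta = d_t rho + v * d_x rho."""
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    rho = rho_net(t, x)                                                 # shape (n, 1)
    drho_dt, drho_dx = torch.autograd.grad(rho.sum(), (t, x), create_graph=True)
    return drho_dt + v * drho_dx

def pd_loss(rho_net, t_colloc, x_colloc, v=1.0):
    """L_pd: mean squared residual over collocation points drawn from q_Theta."""
    residual = advection_residual(rho_net, t_colloc, x_colloc, v)
    return (residual ** 2).mean()
```

The same pattern applies to the other PDEs discussed below; only the residual function changes.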
Applicability of pdPINNs. Although motivated in the context of an advection problem, the proposed approach is generally applicable to a wide range of PDEs. The advection equation (Eq. 8) can be seen as a special case of mass conservation (assuming ∇ · v = 0), which is one of the fundamental physical principles expressed as a continuity equation. This continuity equation relates temporal changes of the fluid density ρ to spatial changes of the flux density ρv through

∂tρ + ∇ · (ρv) = 0. (12)

Another common physical process that is suited for our approach is diffusion, such as in the heat equation, where local interactions of particles give rise to the following PDE (as established by Fick’s second law):

∂tT − α∇²T = 0, (13)

where T denotes the temperature interpreted as a density, α the thermal (or mass) diffusivity, and ∇² the Laplacian operator. By introducing additional constraints to the diffusion and mass conservation, one can describe viscous fluids with the Navier-Stokes equations or even self-propelled, active particles, for which Toner and Tu (Toner & Tu, 1995; Tu et al., 1998; Toner & Tu, 1998) introduced hydrodynamic equations. Other possible applications involve Maxwell’s equations for the conservation of charge in electrodynamics, as well as the distribution of Brownian particles with drift described by the Fokker-Planck equations. In general, our method is applicable in settings where (i) a non-negative scalar field (with a finite integral) of interest can be interpreted as a particle density, and (ii) the local interactions of these particles give rise to the considered PDEs.

4 MODEL AND IMPLEMENTATION

A wide range of different network architectures and optimization strategies for PINNs have emerged. They emphasize well-behaved derivatives with respect to the input domain (Sitzmann et al., 2020), allow higher expressivity for modelling high-frequency data (Tancik et al., 2020; Wang et al., 2021b), or resolve gradient pathologies within PINNs (Wang et al., 2021a). As our method does not rely on a specific architecture, any such improvement can be easily combined with the proposed pdPINNs. For the experiments in this work we use simple fully-connected networks with sinusoidal (Sitzmann et al., 2020) or tanh activations (see section 5).

Finite total density. For reformulating the predicted density ρΘ as a probability, we have to ensure non-negativity as well as a finite integral over the input domain Ω. Non-negativity can, for example, be achieved via a squared activation function after the last layer. An additional bounded activation function g is then added, which guarantees that the output lies within a pre-specified range [0, cmax]. The integral over Rd can then be enforced to be finite by multiplying the bounded output with a Gaussian kernel. Summarizing these three steps, let ρ̃Θ denote the output of the last layer of our fully-connected neural network and pgauss(x) = N(x; µ, Σ); then we predict the density ρΘ as

ρΘ(t,x) = pgauss(x) · g(ρ̃Θ(t,x)²) ≤ cmax · pgauss(x). (14)

In practice, the choice of cmax does not affect the model as long as it is sufficiently large. The mean µ and covariance Σ are maximum likelihood estimates based on the observations, i.e. the sample mean x̄ and sample covariance Σ̄ of the sensor locations. To allow the network more flexibility, we add a scaled identity matrix to the covariance, Σ = Σ̄ + c · I, where c can be set to a large value for solving PDEs when only initial conditions, but no observations, are available.

Markov chain Monte Carlo (MCMC) sampling. Finally, MCMC methods allow us to draw samples from the unnormalized density ρΘ(t,x). We consider several MCMC samplers and emphasize that the wide range of well-established methods offers the possibility of using a specialized sampler for the problem at hand, should the need arise. Gradient-based samplers such as Hamiltonian Monte Carlo (Duane et al., 1987; Betancourt, 2017) are particularly suited for our setting, as the gradients of ρΘ with respect to the input space are readily available. For problems where boundaries are known and we have to sample from a constrained region, a bijective transformation is used so that the Markov chain may operate in an unconstrained space (Parno & Marzouk, 2018). In our experience, both Metropolis-Hastings and Hamiltonian Monte Carlo already worked sufficiently well for a wide range of PDEs without requiring much fine-tuning. We highlight that pdPINNs do not directly depend on MCMC as a sampler; alternative sampling methods such as modern variational inference schemes (Rezende & Mohamed, 2015) can also be used as a substitute. For details regarding the samplers used and their implementation we refer to the Experiments section 5 and Appendix section A.1.
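As an illustration of the two components just described, the following sketch combines a density network following Eq. 14 (with a one-dimensional spatial input, a tanh-based bounded activation as one possible choice of g, and a diagonal Gaussian envelope) with a basic random-walk Metropolis-Hastings sampler for drawing collocation points from the unnormalized ρΘ. All names, layer sizes, and step sizes are illustrative assumptions rather than the configuration used in the paper.

```python
# Hedged sketch of the density parameterization in Eq. 14 and a simple
# Metropolis-Hastings sampler targeting the unnormalized rho_Theta.
import torch
import torch.nn as nn

class DensityNet(nn.Module):
    """rho_Theta(t, x) = p_gauss(x) * g(rho_tilde(t, x)^2), cf. Eq. 14 (1D example)."""
    def __init__(self, hidden=64, c_max=1e3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.c_max = c_max
        # mu / sigma would be maximum-likelihood estimates of the sensor locations
        self.register_buffer("mu", torch.zeros(1))
        self.register_buffer("sigma", torch.ones(1))

    def forward(self, t, x):
        raw = self.mlp(torch.cat([t, x], dim=-1))
        bounded = self.c_max * torch.tanh(raw ** 2 / self.c_max)      # non-negative, bounded by c_max
        gauss = torch.exp(-0.5 * ((x - self.mu) / self.sigma) ** 2)   # unnormalized Gaussian envelope
        return gauss * bounded                                        # finite integral over x

@torch.no_grad()
def mh_collocation_points(rho_net, n_samples, t_max=1.0, n_burnin=500, step=0.1):
    """Random-walk Metropolis-Hastings targeting rho_Theta(t, x) up to its normalization constant."""
    z = torch.tensor([[0.5 * t_max, 0.0]])            # current state (t, x)
    samples = []
    for i in range(n_burnin + n_samples):
        prop = z + step * torch.randn_like(z)
        p_cur = rho_net(z[:, :1], z[:, 1:])
        p_prop = rho_net(prop[:, :1], prop[:, 1:])
        if prop[0, 0] < 0.0 or prop[0, 0] > t_max:     # reject proposals outside the time window
            p_prop = torch.zeros_like(p_prop)
        if torch.rand(()) < (p_prop / (p_cur + 1e-12)).clamp(max=1.0):
            z = prop
        if i >= n_burnin:
            samples.append(z.clone())
    return torch.cat(samples, dim=0)                   # (n_samples, 2) collocation points (t, x)

# Usage sketch: draw 1024 collocation points from the current density estimate.
# In practice, a warm-up phase with a background distribution is used (Appendix A.1).
rho_net = DensityNet()
colloc = mh_collocation_points(rho_net, n_samples=1024)
```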
5 EXPERIMENTS

In this section we demonstrate the advantages of pdPINNs compared to uniform sampling, importance sampling (Nabian et al., 2021), and the adaptive refinement methods RAR (Lu et al., 2021) and OT-RAR (Tadiparthi & Bhattacharya, 2021). Although we refer to it as uniform sampling, in all our experiments we rely on quasi-random Sobol sequences for more stable behavior in the low-sample regime. To guarantee a fair comparison, we considered slight variations of the proposed implementations of RAR and OT-RAR, so that only a limited number of collocation points is used. For the pdPINNs we consider multiple sampling schemes, including inverse transform sampling (IT-pdPINN), Metropolis-Hastings (MH-pdPINN), and Hamiltonian Monte Carlo (HMC-pdPINN) methods. The models in sections 5.1 and 5.2 are implemented in PyTorch (Paszke et al., 2019), with a custom Python implementation of the MH and inverse transform samplers. For the Fokker-Planck experiment in section 5.3, we make use of the efficient MCMC implementations provided by TensorFlow Probability (Abadi et al., 2016; Lao et al., 2020) and the utilities of the DeepXDE library (Lu et al., 2021). More details, as well as further experiments comparing the wall-time of the various samplers, are provided in the Appendix, with the code included in the supplementary material.

5.1 MASS CONSERVATION FOR SIMULATED PARTICLES

As a challenging prediction task we consider a setting motivated by the real-world problem of modelling bird densities and velocities measured from a set of weather radars (Dokter et al., 2011; Nussbaumer et al., 2019; 2021) – or, more generally, the area of radar aeroecology. A non-steady compressible fluid in three dimensions is simulated by propagating fluid parcels through a pre-defined velocity field, i.e. the fluid is simulated using the conservation of mass as the underlying PDE (see Eq. 12). To provide the network with training observations, we introduce a set of spatially fixed sensors (comparable to radars) which count over time the number of fluid parcels within a radius r and over 21 contiguous altitude layers. A disjoint set of sensors is used for the validation set, while the test performance is evaluated on a grid.
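As an illustration of the measurement process just described (detailed further in Appendix A.3.2), the following sketch counts simulated parcels within a sensor's radius, split into altitude bins; the array shapes, altitude range, and parcel numbers are hypothetical.

```python
# Hedged sketch of a simulated "radar" measurement: count parcels within a
# horizontal radius of the sensor, binned into altitude layers.
import numpy as np

def sensor_counts(parcels, sensor_xy, radius, z_edges):
    """parcels: (N, 3) positions at one time step; returns one count per altitude bin."""
    d_xy = np.linalg.norm(parcels[:, :2] - sensor_xy[None, :], axis=1)
    inside = parcels[d_xy <= radius]
    counts, _ = np.histogram(inside[:, 2], bins=z_edges)
    return counts

# Example with 21 altitude layers, as in the experiment (altitude range assumed).
z_edges = np.linspace(0.0, 1.0, 22)
parcels = np.random.rand(1000, 3)
print(sensor_counts(parcels, np.array([0.5, 0.5]), radius=0.2, z_edges=z_edges))
```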
The bird's-eye view of the setting is shown in Figure 2a, where circles indicate the area covered by the radars. Figure 2b additionally shows the 3D simulated data projected along the z-axis and over time. In Appendix section A.3 we describe the data generation and training setting in detail and provide the corresponding code in the supplementary material. For modeling the density and velocity, two sinusoidal representation networks (SIREN) (Sitzmann et al., 2020) ρΘ1(t,x) and vΘ2(t,x) are used, which are then regularized by enforcing the continuity equation for the conservation of mass (see Eq. 12). To showcase the sample efficiency of pdPINNs, experiments are performed over a wide range of collocation points (256 to 65536). In each setting the PDE weight w2 (see Eq. 5) was selected with a grid search based on the highest 1st-quartile R² on a validation set. The resulting box-plots of the test R² are provided in Figure 3, where the “Baseline” corresponds to training without any PDE loss. The proposed pdPINN approach clearly outperforms alternative (re-)sampling methods across all numbers of collocation points. Already with very few collocation points (512), pdPINNs achieve results that uniform sampling only reaches with orders of magnitude more points (32768). Finally, we observe that the performance gap shrinks as the number of collocation points increases, eventually converging to the same limiting value. Even when getting close to the memory limit of an NVIDIA Titan X GPU, other sampling strategies at best achieve results comparable to pdPINNs. In the Appendix (Figure A.6) we provide an additional qualitative comparison of the mass conservation between OT-RAR and MH-pdPINN with 2048 collocation points. As an additional experiment we simplified the setting by projecting the data onto the xy-plane, i.e. the bird's-eye view, which is a common setting for geostatistical data (e.g. in Nussbaumer et al. (2019)). The results in this 2D setting, which are provided in the Appendix (Figure A.8) and described in detail in section A.3, are very similar in nature to the 3D setting, although with a smaller performance gap with respect to alternative sampling methods. This decrease of the gap is to be expected, as the lower-dimensional space is much easier to explore with uniform proposals.

5.2 HEAT EQUATION

We further consider a 2D diffusion problem, namely the heat equation introduced in section 3, where randomly distributed sensors provide measurements of the temperature. We focus on a general setting with the initial conditions being zero temperature everywhere except for a specified region, as shown in Figure 4a, and we let the system evolve for t ∈ [0, 0.2]. The networks are only provided with sensor measurements of the temperature; for further details see Appendix section A.4. Temperature predictions for PINNs with uniform sampling and pdPINNs are illustrated in Figures 4b and 4c, respectively, with the ground truth in Figure 4a. We can observe that the uniform sampling strategy does not allow the model to focus on the relevant parts of the domain, i.e. regions with high temperature, and that it visibly fails to reconstruct the temperature profile. In contrast, the pdPINN promotes sampling in regions of higher density and predicts the true temperature more reliably.
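For reference, below is a minimal autograd sketch of the heat-equation residual (Eq. 13) that plays the role of fΘ in this experiment; the network interface temp_net(t, x, y) and the diffusivity value are illustrative assumptions.

```python
# Hedged sketch: residual of the 2D heat equation, d_t T - alpha * (d_xx T + d_yy T).
import torch

def heat_residual(temp_net, t, x, y, alpha=1.0):
    t, x, y = (u.clone().requires_grad_(True) for u in (t, x, y))
    T = temp_net(t, x, y)
    dT_dt, dT_dx, dT_dy = torch.autograd.grad(T.sum(), (t, x, y), create_graph=True)
    d2T_dx2 = torch.autograd.grad(dT_dx.sum(), x, create_graph=True)[0]
    d2T_dy2 = torch.autograd.grad(dT_dy.sum(), y, create_graph=True)[0]
    return dT_dt - alpha * (d2T_dx2 + d2T_dy2)
```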
We also quantitatively evaluate the performance of the two approaches in terms of the test R² of the predicted temperature; the results are presented in Appendix section A.4, where we again observe the same convergence between uniform sampling and pdPINNs for high numbers of collocation points.

5.3 FOKKER-PLANCK EQUATION

For a demonstration of a forward problem, i.e. a setting without any observed data but only initial conditions, we solve the Fokker-Planck (FP) equations in a setting where an analytical solution is available (cf. Särkkä & Solin (2019)). The FP equations describe the evolution of the probability density of the movement of Brownian particles under a drift. More specifically, assume we are given particles at time t0 which are distributed according to p(t0, x). Let the movements of these particles be described by the following stochastic differential equation, where Wt denotes the standard Wiener process:

dXt = µ(t, Xt) dt + σ(t, Xt) dWt (15)

with known drift µ(t, Xt) and diffusion coefficient D(t, Xt) = σ²(t, Xt)/2. The FP equation for the probability density p(t, x) of the random variable Xt is then given by

∂p(t, x)/∂t = −∂/∂x [µ(t, x) p(t, x)] + ∂²/∂x² [D(t, x) p(t, x)]. (16)

We train a network to predict the (probability) density pΘ(t, x) given a known sinusoidal drift and constant diffusion, which are discussed in detail in the Appendix. Data is only provided for the initial condition, and the PDE loss is based on Eq. 16 within the space Ω = [−1.5, 1.5] and time t ∈ [−1, 1]. As the analytical solution is available in the form of a probability density, we can estimate the KL divergence KL(p||pΘ) to evaluate the performance. Furthermore, we can sample collocation points from the true particle distribution p(t, x) (referred to as “p(t, x) as sampler”), offering a “best-case scenario” for pdPINNs. A total of 5000 collocation points were used, and weights were manually tuned based on the error on a validation set. Figure 5a shows the evolution of the KL divergence during training, highlighting that pdPINN-based methods require fewer steps to achieve a low divergence. In addition, sampling from the true particle distribution leads to the fastest improvement and the lowest divergence after 30000 training steps. A qualitative comparison of the results is given in Figure 5b, showing that RAR and uniform sampling fail to propagate the sine wave forward. The ground truth of the problem and wall-times for the different methods are given in Appendix section A.5.

6 CONCLUSION

In this work, we introduced a general extension to PINNs that is applicable to a wide variety of problem settings involving physics-based regularization of neural networks. In order to overcome the limitations of classical mesh-based Eulerian PINNs, we introduced a novel PDE loss that is defined with respect to the particle density for rather general classes of PDEs. By employing MCMC methods to sample collocation points from the density approximated by the network, we derived an efficient and easy-to-implement improvement for providing a more appropriate regularization objective in PINNs. In particular, our new pdPINNs are completely mesh-free, thereby overcoming severe efficiency problems of classical PINNs in high-dimensional and sparse settings. Further, the absence of a mesh allows us to elegantly handle settings with uncertain or unknown domain boundaries.
As we have demonstrated, our method is applicable to a wide spectrum of PDEs, ranging from hydrodynamic flow problems to electro- and thermo-dynamic problems, as well as more general applications of the Fokker-Planck equations.

A APPENDIX

A.1 BACKGROUND SAMPLING FOR PDPINNS

At initialization, the network prediction ρΘ is random and thus does not carry any useful information, i.e. sampling from this density would be meaningless. Therefore, we start training the pdPINNs with a warm-up phase in which samples are obtained from a pre-specified background distribution

x ∼ pbg(t,x) = p(t) pbg(x|t) (17)

with p(t) = U(0, T). To avoid introducing a mesh, we can rely on the Gaussian distribution introduced in Section 4, i.e. pbg(x|t) = pgauss(x). As a second alternative, we consider random convex combinations of c data points from {x(i)}Ni=1, summarized as rows of a matrix Z ∈ Rc×d. This leads to x = mZ with weights m ∈ Rc drawn from a Dirichlet distribution, i.e. m ∼ Dir(α = 1). Of course, a uniform sampling mechanism on a defined region is also suitable, and the final choice depends on the data and PDE at hand. We found that all of these methods work well in practice. We initially draw all samples from the background distribution and then slowly increase the proportion of samples obtained from the particle density, as we found that retaining some background samples slightly helps training.

A.2 IMPLEMENTATION OF RAR AND OT-RAR

For our comparison, we considered the adaptive refinement methods RAR and OT-RAR, proposed by Lu et al. (2021) and Tadiparthi & Bhattacharya (2021, preprint). As originally proposed, both methods rely on consecutive refinements of a fixed grid, where the number of collocation points is steadily increased and, once added, collocation points are never removed. To allow for a fairer comparison, we adapt both methods to use a limited budget of points, which we regularly resample. This leads to slightly modified versions of the methods that are similar in spirit to the originals. For learning the linear mapping proposed by Tadiparthi & Bhattacharya (2021), we rely on the PyOT (Flamary et al., 2021) implementation of Knott & Smith (1984). The pseudo-code for sampling a set of collocation points is given in Algorithm 1 and Algorithm 2. The required input fΘ refers to the PDE residual approximated by the network, as discussed in Section 1. For more specific details on the methods we refer to the original papers.

Algorithm 1 Adapted RAR
Input: fΘ, uniform distribution UB, number of collocation points k, previous collocation points Xprev.
  Xprop ← [x1, x2, . . . , xk]ᵀ with xi ∼ UB            ▷ Sample proposals
  Xcomb ← concat(Xprev, Xprop)                          ▷ Concatenate old and new points
  Xnew ← topk(Xcomb, ||fΘ(Xcomb)||₂², k)                ▷ Keep top k proposed points based on fΘ
Output: Xnew

A.3 EXPERIMENTS: CONSERVATION OF MASS

In the supplementary material we provide Python code for the data generation and for the pdPINN model. Below we provide the details for all conducted experiments. Furthermore, we provide short videos showing the predicted density movements for each approach; more details on this can be found in the README.html provided in the supplementary files. All experiments were run on a computing cluster using Nvidia GeForce GTX Titan X GPUs with 12 GB VRAM. Settings that required more memory were run on an RTX8000 with 48 GB VRAM. Up to 16 Titan X GPUs, or 4 RTX8000s, could be used in parallel.
In most settings, training in each experiment took less than 10 minutes.

Algorithm 2 Adapted OT-RAR
Input: fΘ, uniform distribution UB, number of collocation points k, number of points for the empirical distribution j < 2k, previous collocation points Xprev.
  Xprop ← [x1, x2, . . . , xk]ᵀ with xi ∼ UB            ▷ Sample proposals
  Xcomb ← concat(Xprev, Xprop)                          ▷ Concatenate old and new points
  Xtarget ← topk(Xcomb, ||fΘ(Xcomb)||₂², j)             ▷ j samples for the target empirical distribution
  Xsource ← [x1, x2, . . . , xj]ᵀ with xi ∼ UB          ▷ j samples for the source empirical distribution
  MOT ← LinOT(Xsource, Xtarget)                         ▷ Obtain linear operator that maps to the target distribution
  Xnew ← [x1, x2, . . . , xk]ᵀ with xi ∼ UB             ▷ Sample uniformly
  Xmap ← MOT(Xnew)                                      ▷ Map samples to the target distribution
Output: Xmap

A.3.1 ADDITIONAL EXPERIMENTAL RESULTS

3D Setting. Figure A.6 showcases the density projected along the z-axis for a random run of the OT-RAR method and of the Metropolis-Hastings-based pdPINN when using 2048 collocation points. The OT-RAR PINN shows disconnected density predictions that clearly violate mass conservation, whereas the Metropolis-Hastings-based pdPINN is capable of mostly preserving it. The boxplot in Figure A.8 highlights the difference in the required number of collocation points.

2D Setting. As mentioned in Section 5, we repeated the Conservation of Mass experiment in a slightly altered setting, where the data is projected onto the xy-plane, reducing it to a 2D+Time problem. The general setup is similar to the 3D setting, although a smaller network and different training parameters are used, which are listed in the sections below.

A.3.2 DATA GENERATION

Here we provide a more detailed description of the generated data, namely the velocity field used and the method for obtaining simulated “radar measurements”.

Velocity field. The velocity field in the xy-plane was generated from a scalar potential field Φ : R² → R and the z-component of a vector potential a : R² → R. Through the Helmholtz decomposition¹ we can construct the velocity field vxy : R² → R²:

vxy(x, y) = −∇Φ + (∂a/∂y, −∂a/∂x)ᵀ. (18)

For both experiments the following fields were used:

Φ(x, y) = −(1/2) (x − 2)(y − 2), (19)
a(x, y) = −(1/5) exp(−((2/3) x)² − ((2/3) y)²). (20)

The derivatives were obtained using the symbolic differentiation library SymPy (Meurer et al., 2017). To add a non-steady component, the resulting velocity field is modulated in amplitude as a function of time t ∈ [0, 3]:

vxy_t(t, x, y) = vxy(x, y) · ((3/2) |sin((2/3) π t)| + 0.05). (21)

The z (altitude) component of the velocity only depends on time and is given by:

vz(t) = 1.6 · sin((4/3) π t). (22)

¹This is the 2D formulation of the Helmholtz decomposition, where the vector potential has non-zero components only along the z-axis, i.e. a3d = [0, 0, a]ᵀ. The full decomposition is commonly written as v3d = −∇Φ3d + ∇ × a3d.
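The following is a small sketch of how the velocity field of Eqs. 18–22 can be constructed, mirroring the described use of SymPy for the derivatives; the function names are our own.

```python
# Hedged sketch of the velocity field in Eqs. 18-22.
import numpy as np
import sympy as sp

x, y = sp.symbols("x y")
Phi = -sp.Rational(1, 2) * (x - 2) * (y - 2)                      # scalar potential, Eq. 19
a = -sp.Rational(1, 5) * sp.exp(-(sp.Rational(2, 3) * x) ** 2
                                - (sp.Rational(2, 3) * y) ** 2)   # vector potential (z-component), Eq. 20

# v_xy = -grad(Phi) + (da/dy, -da/dx), Eq. 18
vx = sp.lambdify((x, y), -sp.diff(Phi, x) + sp.diff(a, y), "numpy")
vy = sp.lambdify((x, y), -sp.diff(Phi, y) - sp.diff(a, x), "numpy")

def velocity(t, xs, ys):
    """Time-modulated velocity field (Eqs. 21-22); returns (v_x, v_y, v_z)."""
    amp = 1.5 * np.abs(np.sin(2.0 / 3.0 * np.pi * t)) + 0.05
    vz = 1.6 * np.sin(4.0 / 3.0 * np.pi * t)
    return amp * vx(xs, ys), amp * vy(xs, ys), vz
```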
Simulation. For the initial distribution of the fluid, the particle positions were drawn from Gaussian mixtures. For t ∈ [0, 3], these particles were simulated using the velocity field constructed above. Overall, the paths of the roughly 240000 parcels were simulated using a basic backward Euler scheme.

Measurements. The measurements at the sensors were obtained by counting the number of particles within a given radius over multiple timesteps. The density corresponds to the mass divided by the sensor area, and the velocity is an average over all the particle velocities. For the training data, additional zero-mean isotropic Gaussian noise is added to all measurements. In the 3D setting, measurements of density and velocity are obtained from 132 sensors placed on the xy-plane within the region [−3, 3]², at 11 equidistant timesteps. In the 2D setting, the same set of sensors is used.

A.3.3 ARCHITECTURE AND TRAINING

In both experiments, the networks for density ρΘ1 and velocity vΘ2 prediction (parameterized by Θ1 and Θ2, respectively) are fully-connected networks with sinusoidal activation functions, as proposed by Sitzmann et al. (2020). The number of layers and units for each setting is shown in Table A.1. The sine frequency hyperparameter required in the SIREN architecture was tuned by hand according to the validation loss of the baseline model (i.e. without a PDE loss), leading to a sine frequency of 12 for the 2D setting and 5 for the 3D setting. We note that the proposed default value of 30 in Sitzmann et al. (2020) heavily overfits our relatively low-frequency data; we thus recommend adjusting this hyperparameter when using SIRENs in PINNs. For training the networks, the ADAM optimizer (Kingma & Ba, 2014) with a learning rate of 8 × 10⁻⁴ (2D setting) or 10⁻⁴ (3D setting) was used. The learning rate was multiplied by a factor of 0.99 each epoch. All models were trained for 300 (3D setting) or 500 (2D setting) epochs. The 2D setting was trained using full-batch gradient descent, whereas for the 3D setting we used a mini-batch size of 6931. In all experiments we trained and evaluated on 10 different random seeds.

A.4 EXPERIMENTS: HEAT EQUATION

The dataset for the heat equation experiment was generated by numerically solving the heat equation with the finite difference method, more precisely the Forward Time, Centered Space (FTCS) scheme (Recktenwald, 2004). We used Dirichlet boundary conditions in the form of zero temperature on a square boundary far away from the relevant domain. These boundary conditions are not provided to the PINNs, resulting in a slightly more difficult setting. Overall, the dataset is composed of 1000 training points, 1971120 test points and 492780 validation points. We made sure the training points contained enough information about the initial condition, i.e. we selected a sufficient number of points around the initial source of non-zero temperature. In contrast, validation and test points are taken uniformly in time and space. During the warm-up phase of the pdPINN training, collocation points were sampled uniformly; afterwards, 90% of the samples were drawn from the particle density distribution, which is proportional to the modeled temperature. Collocation points were re-sampled every 500 epochs. In contrast to the previous experiments, the employed architecture is a fully-connected two-layer neural network with 32 hidden units and tanh activations. The implementation is in PyTorch (Paszke et al., 2019), using the ADAM optimizer (Kingma & Ba, 2014) combined with an exponential learning rate scheduler which multiplies the learning rate by a factor of 0.9999 at each epoch, starting with a rate of 10⁻⁴ and decreasing it until reaching a minimum value of 10⁻⁵. Training was terminated through early stopping as soon as the validation R² did not improve for more than 3000 epochs.

Additional results. Figure A.9 illustrates the test R² of the predicted temperature T, averaged over 20 different seeds. Error bars correspond to the 95% confidence interval of the mean estimate, based on 1000 bootstrap samples, while colors indicate the different PDE weights w2 explored.
As in the previous settings, with few samples (16) the regularization enforced by the PDE loss is not strong enough, and PINNs and pdPINNs show comparable results in this regime, as expected. However, as the number of samples increases (32, 64, 128, 256), the proposed pdPINNs quickly and steadily outperform uniform sampling. Lastly, we also verified that in the limit of many samples (512, 1024) the two sampling strategies converge, as in such a low-dimensional domain uniform samples fully and densely cover the considered area. This, again, is in line with the results observed in the other experiments.

A.5 EXPERIMENTS: FOKKER-PLANCK EQUATIONS IN TENSORFLOW

Within the Fokker-Planck experiment we showcase the different training behaviors of uniform sampling, RAR, and multiple MCMC samplers. Due to the low dimensionality of the problem, we additionally consider an Inverse-Transform (IT) sampler (Steele, 1987) for efficiently sampling from the density. The IT sampler relies on the empirical CDF estimated via uniform samples drawn over the whole domain. This method does not require building up a Markov chain and is thus very fast, but it only works well in low dimensions. More specifically, we compare the following methods for selecting collocation points, with a highly efficient implementation of the MCMC methods provided by TensorFlow Probability:

I.) Uniform sampling
II.) Residual Adaptive Refinement (Lu et al., 2021)
III.) pdPINN with Inverse-Transform (IT) sampling (Steele, 1987)
IV.) pdPINN with Metropolis-Hastings (MH) MC with parallel tempering (Earl & Deem, 2005)
V.) pdPINN with Hamiltonian MC (HMC) with parallel tempering (Earl & Deem, 2005) and dual averaging step-size adaptation (Hoffman et al., 2014, section 3.2)

A.5.1 SETTING AND ANALYTICAL SOLUTION

We consider the following setting over the time interval [t0, tn] = [−1, 1] with drift function µ, noise σ and initial particle positions p(x|t = t0) given by

µ(t, Xt) = µ(t) = sin(10t) (23)
σ(t, Xt) = σ = 0.06 (24)
p(x|t = t0) = N(0, 0.02² · Id) (25)

The PDE has an analytical solution (cf. Särkkä & Solin (2019)), which is given by

p(x|t) = N(µs(t), σs²(t)) (26)
p(t) = U(t0, tn) (27)
µs(t) = −cos(10t)/10 + cos(10)/10 (28)
σs²(t) = 0.0036 t + 0.004. (29)

For evaluating the deviation of our prediction from the analytical solution, we estimate the KL divergence KL(p(x, t) || p̂Θ(x, t)) by sampling 10000 points from the true p(x, t).

A.5.2 SETUP

We use a SIREN network and additionally sample 5000 collocation points at the initial time step, which is the default behavior of DeepXDE. An overview of the architecture and training details is given in Table A.2. Experiments were performed with an NVIDIA GeForce RTX 2080 Ti and an Intel(R) Xeon(R) CPU E5-1660 v3 @ 3.00GHz processor.

A.5.3 WALL TIME

The wall times for the different methods are provided in Figure A.10. Although Metropolis-Hastings and Hamiltonian Monte Carlo require more time per step compared to uniform sampling, the inverse transform sampler achieves a comparable speed.
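As a sketch of the evaluation described in A.5.1, the divergence between the analytical solution and the network can be estimated by sampling from the true distribution. Here we treat it as an expected conditional KL divergence with t uniformly distributed; the callable p_hat is an assumed interface, and the details may differ from the exact computation behind Figure 5a.

```python
# Hedged sketch: Monte Carlo estimate of E_{t~U(t0,tn)} KL( p(x|t) || p_hat(x|t) ),
# using the analytical solution of Eqs. 26-29. `p_hat(t, x)` is an assumed callable.
import numpy as np

def mu_s(t):
    return -np.cos(10 * t) / 10 + np.cos(10) / 10        # Eq. 28

def var_s(t):
    return 0.0036 * t + 0.004                             # Eq. 29

def expected_kl(p_hat, n=10_000, t0=-1.0, tn=1.0, seed=0):
    rng = np.random.default_rng(seed)
    t = rng.uniform(t0, tn, size=n)
    x = rng.normal(mu_s(t), np.sqrt(var_s(t)))            # samples from the true p(x|t)
    log_p = -0.5 * np.log(2 * np.pi * var_s(t)) - (x - mu_s(t)) ** 2 / (2 * var_s(t))
    return np.mean(log_p - np.log(p_hat(t, x) + 1e-12))
```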
ICLR
Title Mesh-free Eulerian Physics-Informed Neural Networks Abstract Physics-informed Neural Networks (PINNs) have recently emerged as a principled way to include prior physical knowledge in form of partial differential equations (PDEs) into neural networks. Although PINNs are generally viewed as mesh-free, current approaches still rely on collocation points within a bounded region, even in settings with spatially sparse signals. Furthermore, if the boundaries are not known, the selection of such a region is difficult and often results in a large proportion of collocation points being selected in areas of low relevance. To resolve this severe drawback of current methods, we present a mesh-free and adaptive approach termed particle-density PINN (pdPINN), which is inspired by the microscopic viewpoint of fluid dynamics. The method is based on the Eulerian formulation and, different from classical mesh-free method, does not require the introduction of Lagrangian updates. We propose to sample directly from the distribution over the particle positions, eliminating the need to introduce boundaries while adaptively focusing on the most relevant regions. This is achieved by interpreting a nonnegative physical quantity (such as the density or temperature) as an unnormalized probability distribution from which we sample with dynamic Monte Carlo methods. The proposed method leads to higher sample efficiency and improved performance of PINNs. These advantages are demonstrated on various experiments based on the continuity equations, Fokker-Planck equations, and the heat equation. 1 INTRODUCTION Many phenomena in physics are commonly described by partial differential equations (PDEs) which give rise to complex dynamical systems but often lack tractable analytical solutions. Important examples can be found for instance in fluid dynamics with typical applications in the design of gas and steam turbines (Oosthuizen & Carscallen, 2013), as well as modeling the collective motion of self-driven particles (Marchetti et al., 2013) such as flocks of birds or bacteria colonies (Szabó et al., 2006; Nussbaumer et al., 2021). Despite the relevant progress in establishing numerical PDE solvers, such as finite element and finite volume methods, the seamless incorporation of data remains an open problem (Freitag, 2020). To fill this gap, Physics-informed Neural Networks (PINNs) have emerged as an attractive alternative to classical methods for data-based forward and inverse solving of PDEs. The general idea of PINNs is to use the expressive power of modern neural architectures for solving partial differential equations (PDEs) in a data-driven way by minimizing a PDE-based loss, cf. Raissi et al. (2019). Consider parameterized PDEs of the general form f(t,x|λ) := ∂tu(t,x) + P (u|λ) = 0, (1) where P is a non-linear operator parameterized by λ, and ∂t is the partial time derivative w.r.t. t ∈ [0, T ]. The position x ∈ Ω is defined on a spatial domain Ω ⊆ Rd. The PDE is subject to initial condition g0 u(0,x) = g0(x) (2) for x ∈ Ω, and boundary conditions g∂Ω u(t,x) = g∂Ω(x) (3) for x ∈ ∂Ω and t ∈ [0, T ]. The main idea of PINNs consists in approximating u(t,x) (and hence f(t,x)) with a neural network given a small set of N noisy observations uobs u(t(i),x(i)) + ϵ(i) = u (i) obs (4) with noise ϵ(i) ≪ u(i) ∀i ∈ {0, 1, . . . , N}. 
This allows us to consider the following two important problem settings: If λ is known, the PDE is fully specified, and we aim to find a solution u in a data-driven manner by training a neural network. The PDE takes the role of a regularizer, where the particular physical laws provide our prior information. A second setting considers the inverse learning of the parameters λ by including them into the optimization process in order to infer physical properties such as the viscosity coefficient of a fluid (Jagtap et al., 2020). Initial work on solving time-independent PDEs with neural networks with such PDE-based penalties was pioneered by Dissanayake & Phan-Thien (1994) and van Milligen et al. (1995), with later adoptions such as Parisi et al. (2003) extending it to non-steady and time-dependent settings. Loss functions. Typically, PINNs approximate f(t,x) by the network fΘ(t,x) in which the parameters Θ are adjusted by minimizing the combined loss of (i) reconstructing available observations (Lobs), (ii) softly enforcing the PDE constraints on the domain (Lf ), and (iii) fulfilling the boundary (Lb) and initial conditions (Linit), i.e. Θ = argmin Θ [w1Lobs(X, t,uobs,Θ) + w2Lf (Θ) + w3Lb(Θ) + w4Linit(Θ)] , (5) with loss weights wi ∈ R≥0. A common choice for Lobs, Lb, and Linit is the expected L2 loss, approximated via the average L2 loss over the observations and via sampled boundary and initial conditions, respectively. It should be noted that the formulation of the forward and inverse problem are identical in this setting, as observations and initial conditions are implemented in a similar manner. Enforcing the PDE. Although PINNs are by nature mesh-free, the PDE loss Lf in Eq. 5 used for the soft enforcement of Eq. 1 requires a similar discretization step for approximating an integral over the continuous signal domain, Lf (Θ)= 1 |[0, T ]× Ω| T∫ t=0 ∫ Ω ||fΘ(t,x)||22dx dt=Ep(t,x) [ ||fΘ(t,x)||22 ] ≈ 1 n n∑ i=1 ||fΘ(ti,xi)||22 (6) with p(t,x) being supported on [0, T ]× Ω. The points {(t(j),x(j))}nj=1 ⊂ [0, T ]× Ω on which the PDE loss is evaluated are commonly referred to as collocation points. This formulation of PINNs for solving Eq. 1 is an Eulerian one, as the function fΘ is updated by evaluating the PDE with respect to collocation points fixed in space. Initial approaches for selecting the collocation points in PINNs relied on a fixed grid (Lagaris et al., 1998; Rudd, 2013; Lagaris et al., 2000), followed up by work proposing stochastic estimates of the integral via (Quasi-) Monte Carlo methods (Sirignano & Spiliopoulos, 2018; Lu et al., 2021; Chen et al., 2019) or Latin Hypercube sampling (Raissi et al., 2019). However, these approaches to Eulerian PINNs cannot be directly applied if there are no known boundaries or boundary conditions, e.g. for Ω = Rd. Additionally, problems can arise if the constrained region is large compared to the area of interest. Considering for example the shock wave (of a compressible gas) in a comparably large space, most collocation points would fall into areas of low density. We argue that due to the locality of particle interactions, the regions with higher density are more relevant for regularizing the network. To address these shortcomings of previous methods, we propose a mesh-free and adaptive approach for sampling collocation points, illustrated on the example of compressible fluids. By changing p(t,x) to the distribution over the particle positions in the fluid we effectively change the loss functional in Eq. 6. 
We then generalize to other settings, such as thermodynamics, by interpreting a positive, scalar quantity of interest with a finite integral as a particle density. Within this work we specifically focus on PDEs that can be derived based on local particle interactions or can be shown to be equivalent to such a view, as for example is the case for the heat equation with its connection to particle diffusion. Notably, we do not require the introduction of Lagrangian updates, as classical mesh-free methods do, which would be based on evaluating the PDE with respect to moving particles (see also section 2). Main contributions. The main contributions of this paper are as follows: • We demonstrate that PINNs with uniform sampling strategies (and refinement methods based on uniform proposals) fail in settings with spatially sparse signals as well as in unbounded signal domains; these problems can severely degrade the network’s predictive performance. • In order to overcome these limitations of existing approaches, we propose a truly mesh-free version of Eulerian PINNs, in which the collocation points are sampled using physicsmotivated MCMC methods. By staying within the Eulerian framework, we avoid conceptual challenges of classical mesh-free methods based on Lagrangian updates such as the enforcement of boundary conditions. • The proposed model is applicable to a huge range of dynamical systems governed by PDEs that share an underlying microscopic particle description, such as several hydrodynamic, electro- and thermo-dynamic problems. • We rigorously evaluate and compare our proposed method with existing approaches in high-dimensional settings. Compared to existing mesh refinement methods, significantly fewer collocation points are required to achieve similar or better predictive performances, while still being more flexible. 2 RELATED WORK Mesh-Free Fluid Dynamics. Classical mesh-free approaches in computational fluid dynamics are based on non-parametric function representations, with Smoothed Particle Hydrodynamics (SPH) (Lind et al., 2020; Gingold & Monaghan, 1977) being the most prominent example. In SPH, fluid properties such as the density and pressure are represented by a discrete set of particles and interpolated using a smoothing kernel function. For updating the function forward in time, the particles have to be propagated according to the Lagrangian formulation of the PDE, relying on the kernel for computing spatial derivatives. One of the benefits of such a representation is that mass is conserved by construction. However, Lagrangian updates become challenging when enforcing boundary conditions, requiring the introduction of ad-hoc "dummy" or "mirror" particles (Lind et al., 2020). Instead, we present a mesh-free, particle-based, PINN that does not require Lagrangian updates, and is already applicable in the Eulerian formulation. It should be noted that the proposed pdPINNs can in principle be combined with Lagrangian updates such as proposed by Raissi et al. (2019) and later by Wessels et al. (2020). But as the intention of this work is to improve upon current Eulerian PINNs, we refer to future work for the comparison and extension to the Lagrangian formalism. Alternative Meshes and Losses for PINNs. Recent work proposes local refinement methods for PINNs by adding more samples within regions of high error (Lu et al., 2021; Tadiparthi & Bhattacharya, 2021). Residual adaptive refinement (RAR) is suggested by Lu et al. 
(2021), which is based on regularly evaluating the PDE loss on a set of uniformly drawn samples. The locations corresponding to the highest PDE loss are then added to the set of collocation points used in training. Tadiparthi & Bhattacharya (2021, preprint) further enhance RAR by learning a linear map between the uniform distribution and the distribution over the PDE loss by optimizing an optimal transport objective. By sampling uniformly and subsequently transforming these samples, it is attempted to focus on regions of higher error. Due to the conceptual similarity to RAR, we will denote this method as "OT-RAR". The work of Nabian et al. (2021) explores Importance Sampling based on the (unnormalized) proposal distribution ||fΘ(t,x)||22 for a more sample efficient evaluation of Eq. 6. Samples are drawn using a variation of Inverse Transform sampling (Steele, 1987). However, in all these cases the underlying mechanism for exploring regions of high error is based on (quasi-) uniform sampling within the boundaries. As such, they do not resolve the issues of unknown boundaries and will furthermore be infeasible in higher dimensions. Kinetic Theory: From particles to PDEs. Kinetic theory shows that essential conservation laws of fluids can be derived from a microscopic (or molecular) viewpoint (Born & Green, 1946). Interactions describing the dynamics of a fluid are described starting from a set of individual particles. The basis of this approach is the so-called molecular distribution function Ψ over phase space, i.e. Ψ(t,x,v) such that ∫ ∆x ∫ ∆v Ψ(t,x,v)dvdx (7) is the probability that a molecule with a velocity within ∆v = ∆v1∆v2∆v3 occupies the volume ∆x = ∆x1∆x2∆x3. Based on this distribution function, it is possible to define common quantities as the (mass or particle) density, (local mean) velocity, and macroscopic PDEs by considering the local interactions of individual particles. The one-particle phase space is commonly known from its application in the Boltzmann equation for modelling two-body interactions describing gases (Green, 1956) and active matter (e.g. flocks of birds) (Bertin et al., 2006). The more general form including higher interaction terms is necessary for deriving conservation laws of liquids (Born & Green, 1946). 3 PARTICLE-DENSITY PINNS In this section we introduce the concept of mesh-free particle-density PINNs (pdPINNs). Firstly, we examine limitations of the common PDE loss in Eq. 6 and, secondly, we present a solution by integrating over the position of particles instead of the full support of the signal domain. The underlying assumption of our approach is that the dynamics described by the PDE can be explained in terms of local interactions of particles. This is the case, for instance, for commonly considered dynamics of gases, liquids or active particles (Hoover & Hoover, 2003; Toner & Tu, 1995). Existing limitations of Eulerian PINNs. Consider the problem of modeling a (possibly non-steady) compressible fluid, i.e. a fluid with a spatially and temporally evolving density ρ(t,x) and velocity v(t,x). For the sake of notational brevity, we will denote these by ρ and v. Given noisy observations, our particular interest lies in the prediction of particle movements, hence in the approximation of the density (and potentially other physical quantities) with a neural network ρΘ. Additional quantities such as the velocity or pressure might also be observed and modeled. 
Commonly, the PDE then serves as a physics-based regularizer of the network by enforcing the PDE loss Lf in Eq. 6 during standard PINN training. For this, Lf is evaluated on a set of collocation points that are, for example, uniformly distributed on a bounded region. However, the limitations of this approach already become apparent when considering a simple advection problem defined by the following PDE: ∂tρ+ v · (∇ρ) = 0. (8) Figure 1 illustrates a one-dimensional case on the domain [0, T ] × Ω, with Ω = R, and a known constant velocity v ∝ 1. We measure the density ρ(i) at different (spatially fixed) points in time and space {(t(i),x(i))}, on which a neural network ρΘ(t,x) is trained. For optimizing the standard PDE loss Lf as given in Eq. 6, we would require a bounded region ΩB := [a, b] ⊂ Ω with a < b and a, b ∈ R. This, in turn, leads to two issues: 1. Since the moving density occupies a small subset of Ω, uniformly distributed collocation points within ΩB will enforce Eq. 8 in areas with low-density. This results in insufficient regularization of ρΘ. 2. Defining a suitable bounded region ΩB requires a priori knowledge about the solution of the PDE, which is generally not available. Choosing too tight boundaries would lead to large parts of the density moving out of the considered area ΩB. Too large boundaries would instead lead to poor regularization as this would worsen the sparsity problem in issue (1.). In practice, most Eulerian PINNs approaches opt for naively defining a sufficiently wide region ΩB, resulting in a poor reconstruction. In the context of our advection problem, this is showcased in Figure 1b. To properly resolve the aforementioned issues, one should (i) focus on areas that have a relevant regularizing effect on the prediction of ρΘ and (ii) adapt to the fluid movements without being restricted to a predefined mesh. Mesh-Free Eulerian PINNs. We thus propose to reformulate the PDE loss in Eq. 6 as the expectation of ||fΘ(t,x)||22 with respect to the molecular distribution Ψ(t,x) introduced in the related work section 2: Lpd(Θ) ≈ ∫ T t=0 ∫ Ω Ψ(t,x) [ ||fΘ(t,x)||22 ] dx dt. (9) This completely removes the need of defining ad-hoc boundaries while providing the ability to flexibly focus on highly relevant regions, i.e. those that are more densely populated. As the particle density corresponds directly to the occupation probability of a molecule Ψ(t,x) with a changed normalization constant, we can estimate Lpd via samples drawn from the normalized particle density, which is denoted as ρN . For homogeneous fluids, this coincides with the normalized mass density. In summary, we propose to draw collocation points from the normalized density: (ti,xi) ∼ ρN (t,x) = 1Z ρ(t,x). (10) The true particle positions and the density ρN are however unknown in practice. Instead, we have to rely on the learned density ρΘ(t,x) as a proxy provided by the neural network. We denote the associated normalized PDF by qΘ(t,x) = 1Z′ ρΘ(t,x) with support on [0, T ]× Ω. The PDE loss is then defined as the expectation w.r.t. qΘ(t,x): Lpd(Θ) = EqΘ(t,x) [ ||fΘ(t,x)||22 ] = ∫ T t=0 ∫ Ω qΘ(t,x) ||fΘ(x, t)||22 dx dt. (11) In order to approximate this integral, samples need to be drawn from qΘ(t,x). This can be done in a principled way by using dynamic Monte Carlo methods, despite the fact that the normalization constant Z is unknown. We highlight that, in contrast to the mesh-based loss in Eq. 6, the loss in Eq. 11 is also suitable for problems on unbounded domains such as Ω = Rd. Applicability of pdPINNs. 
Although motivated in the context of an advection problem, the proposed approach is generally applicable to a wide range of PDEs. The advection equation 8 can be seen as a special case of mass conservation (assuming ∇ · v = 0), which is one of the fundamental physical principles expressed as a continuity equation. This continuity equation relates temporal changes of the fluid density ρ to spatial changes of the flux density ρv through ∂tρ+∇ · (ρv) = 0. (12) Another common physical process that is suited for our approach is diffusion, such as in the Heat Equation, where local interactions of particles give rise to the following PDE (as established by Fick’s second law): ∂tT − α∇2T = 0, (13) where T denotes the temperature interpreted as density, α the thermal (or mass) diffusivity, and ∇2 the Laplacian operator. By introducing additional constraints to the diffusion and mass-conservation, one can describe viscous fluids with the Navier-Stokes equations or even self-propelled, active particles, for which Toner and Tu (Toner & Tu, 1995; Tu et al., 1998; Toner & Tu, 1998) introduced hydrodynamic equations. Other possible applications involve Maxwell’s equations for conservation of charge in electrodynamics, as well as the distribution of Brownian particles with drift described by the Fokker-Planck equations. In general, our method is applicable in settings where (i) a non-negative scalar field (with a finite integral) of interest can be interpreted as a particle density, and (ii) the local interactions of these particles give rise to the considered PDEs. 4 MODEL AND IMPLEMENTATION A wide range of different network architectures and optimization strategies for PINNs have emerged. They emphasize well-behaved derivatives with respect to the input domain (Sitzmann et al., 2020), allow higher expressivity for modelling high frequency data (Tancik et al., 2020; Wang et al., 2021b), or resolve gradient pathologies within PINNs (Wang et al., 2021a). As our method does not rely on a specific architecture, any such improvement can be easily combined with the proposed pdPINNs. For the experiments in this submission we will use simple fully-connected networks with sinusoidal (Sitzmann et al., 2020) or tanh activations (see section 5). Finite total density. For reformulating the predicted density ρΘ as a probability, we have to ensure non-negativity as well as a finite integral over the input domain Ω. Non-negativity can for example be achieved via a squared activation function after the last layer. An additional bounded activation function g is then added, which guarantees the output to be within a pre-specified range [0, cmax]. The integral Rd can then be enforced to be finite by multiplying the bounded output with a Gaussian kernel. Summarizing these three steps, let ρ̃Θ denote the output of the last layer of our fully connected neural network and pgauss(x) = N (x;µ,Σ), then we predict the density ρΘ as ρΘ(t,x) = pgauss(x) g(ρ̃Θ(t,x) 2) ≤ cmaxpgauss(x). (14) In practice, the choice of cmax does not affect the model as long as it is sufficiently large. The used mean µ and covariance Σ are maximum likelihood estimates based on the observations x, i.e. the sample mean x̄ and covariance Σ̄ of the sensor locations. To allow more flexibility in the network, we add a scaled identity matrix to the covariance Σ = Σ̄ + c · I , which can be set to a large value for solving PDEs when only initial conditions, but no observations, are available. Markov chain Monte Carlo (MCMC) sampling. 
Markov chain Monte Carlo (MCMC) sampling. Finally, MCMC methods allow us to draw samples from the unnormalized density ρΘ(t,x). We consider several MCMC samplers and emphasize that the wide range of well-established methods offers the ability to use a specialized sampler for the considered problem, should the need arise. Gradient-based samplers such as Hamiltonian Monte Carlo (Duane et al., 1987; Betancourt, 2017) are particularly suited for our setting, as the gradients of ρΘ with respect to the input space are readily available. For problems where boundaries are known and we have to sample from a constrained region, a bijective transformation is used so that the Markov chain may operate in an unconstrained space (Parno & Marzouk, 2018). In our experience, both Metropolis-Hastings and Hamiltonian Monte Carlo already worked sufficiently well for a wide range of PDEs without requiring much fine-tuning. We highlight that pdPINNs do not directly depend on MCMC as a sampler, and alternative sampling methods such as modern variational inference schemes (Rezende & Mohamed, 2015) can also be used directly as a substitute. For details regarding the samplers used and their implementation, we refer to the Experiments section 5 and Appendix section A.1. 5 EXPERIMENTS In this section we demonstrate the advantages of pdPINNs compared to uniform sampling, importance sampling (Nabian et al., 2021), as well as the adaptive refinement methods RAR (Lu et al., 2021) and OT-RAR (Tadiparthi & Bhattacharya, 2021). Although we use the term uniform sampling, in all our experiments we rely on quasi-random Sobol sequences for more stable behavior in the low-sample regime. To guarantee a fair comparison, we considered slight variations of the proposed implementations of RAR and OT-RAR, so that only a limited number of collocation points is used. For the pdPINNs we consider multiple MCMC schemes, including inverse transform sampling (IT-pdPINN), Metropolis-Hastings (MH-pdPINN), and Hamiltonian Monte Carlo (HMC-pdPINN) methods. The models in sections 5.1 and 5.2 are implemented in PyTorch (Paszke et al., 2019), with a custom Python implementation of the MH and Inverse Transform samplers. For the Fokker-Planck experiment in section 5.3, we make use of the efficient MCMC implementations provided by TensorFlow Probability (Abadi et al., 2016; Lao et al., 2020) and the utilities of the DeepXDE library (Lu et al., 2021). More details, as well as further experiments comparing the wall-time of the various samplers, are provided in the Appendix; the code is provided in the supplementary material. 5.1 MASS CONSERVATION FOR SIMULATED PARTICLES As a challenging prediction task, we consider a setting motivated by the real-world problem of modelling bird densities and velocities measured from a set of weather radars (Dokter et al., 2011; Nussbaumer et al., 2019; 2021) – or, more generally, the area of radar aeroecology. A non-steady compressible fluid in three dimensions is simulated by propagating fluid parcels through a pre-defined velocity field, i.e. the fluid is simulated using the conservation of mass as the underlying PDE (see Eq. 12). To provide the network with training observations, we introduce a set of spatially fixed sensors (comparable to radars) which count over time the number of fluid parcels within a radius r and over 21 contiguous altitude layers. A disjoint set of sensors provides the validation set, while the test performance is evaluated on a grid.
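As an illustration of this measurement process, a minimal NumPy sketch of deriving such synthetic sensor counts from simulated parcel positions is given below; the horizontal-radius check, the uniform altitude binning, and all names are our own illustrative assumptions:

import numpy as np

def sensor_counts(parcels, sensor_xy, radius, z_edges):
    """Count parcels within a horizontal radius of a sensor, binned into altitude layers.

    parcels:   (N, 3) array of parcel positions (x, y, z) at one time step
    sensor_xy: (2,) horizontal sensor location
    radius:    detection radius in the xy-plane
    z_edges:   array of altitude bin edges defining the contiguous layers
    """
    horizontal_dist = np.linalg.norm(parcels[:, :2] - sensor_xy, axis=1)
    in_range = parcels[horizontal_dist < radius]
    counts, _ = np.histogram(in_range[:, 2], bins=z_edges)
    return counts  # one count per altitude layer; densities follow by dividing by the layer volume

# Example: 21 altitude layers, as in the experiment described above
z_edges = np.linspace(0.0, 1.0, 22)
counts = sensor_counts(np.random.rand(1000, 3), np.array([0.5, 0.5]), 0.2, z_edges)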
The bird's-eye view of the setting is shown in Figure 2a, where circles indicate the area covered by the radars. Figure 2b additionally shows the 3D simulated data projected along the z-axis and over time. In the Appendix section A.3 we describe the data generation and training setting in detail and provide the corresponding code in the supplementary material. For modeling the density and velocity, two sinusoidal representation networks (SIREN) (Sitzmann et al., 2020), ρΘ1(t,x) and vΘ2(t,x), are used, which are then regularized by enforcing the continuity equation for the conservation of mass (see Eq. 12). To showcase the sample efficiency of pdPINNs, experiments are performed over a wide range of numbers of collocation points (256 to 65536). In each setting, the PDE weight w2 (see Eq. 5) was selected with a grid search based on the highest first-quartile R² on a validation set. The resulting box plots of the test R² are provided in Figure 3, where the "Baseline" corresponds to training without any PDE loss. The proposed pdPINN approach clearly outperforms alternative (re-)sampling methods across all numbers of collocation points. Already with very few collocation points (512), pdPINNs achieve results for which uniform sampling requires orders of magnitude more points (32768). Finally, we observe that the performance gap shrinks as the number of collocation points increases, eventually converging to the same limiting value. Even when getting close to the memory limit of an NVIDIA Titan X GPU, other sampling strategies at best achieve results comparable to pdPINNs. In the Appendix (Figure A.6) we provide an additional qualitative comparison of the mass conservation between OT-RAR and MH-pdPINN with 2048 samples. As an additional experiment, we simplified the setting by projecting the data onto the xy-plane, i.e. the bird's-eye view, which is a common setting for geostatistical data (e.g. in Nussbaumer et al. (2019)). The results in this 2D setting, which are provided in the Appendix (Figure A.8) and described in detail in section A.3, are very similar in nature to the 3D setting, although with a smaller performance gap with respect to alternative sampling methods. This decrease of the gap is to be expected, as the lower-dimensional space is much easier to explore with uniform proposals. 5.2 HEAT EQUATION We further consider a 2D diffusion problem, namely the heat equation introduced in section 3, where randomly distributed sensors provide measurements of the temperature. We focus on a general setting with the initial conditions being zero temperature everywhere except for a specified region, as shown in Figure 4a, and we let the system evolve for t ∈ [0, 0.2]. The networks are only provided sensor measurements of the temperature; for further details see the Appendix section A.4. Temperature predictions for PINNs with uniform sampling and pdPINNs are illustrated in Figures 4b and 4c, respectively, with the ground truth in Figure 4a. We observe that the uniform sampling strategy does not allow the network to focus on the relevant parts of the domain, i.e. regions with high temperature, and that it visibly fails to reconstruct the temperature profile. In contrast, the pdPINN promotes sampling in regions of higher density and predicts the true temperature more reliably.
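To illustrate how the PDE residual underlying this experiment is evaluated at sampled collocation points, the following is a minimal PyTorch sketch of the heat-equation residual fΘ = ∂tT − α∇²T from Eq. 13, computed with automatic differentiation; the function names and the two-dimensional setting are illustrative assumptions:

import torch

def heat_residual(T_net, t, x, alpha=1.0):
    """PDE residual of the heat equation, d/dt T - alpha * Laplacian(T), at points (t, x)."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    T = T_net(t, x)                                                   # predicted temperature, shape (n, 1)
    dT_dt = torch.autograd.grad(T.sum(), t, create_graph=True)[0]     # time derivative
    grad_x = torch.autograd.grad(T.sum(), x, create_graph=True)[0]    # spatial gradient, shape (n, 2)
    laplacian = 0.0
    for i in range(x.shape[1]):                                       # sum of second derivatives
        laplacian = laplacian + torch.autograd.grad(
            grad_x[:, i].sum(), x, create_graph=True)[0][:, i:i+1]
    return dT_dt - alpha * laplacian

# PDE loss at collocation points drawn from the (normalized) predicted temperature:
# loss_pd = heat_residual(T_net, t_coll, x_coll).pow(2).mean()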
We also quantitatively evaluate the performance of the two approaches in terms of the test R² of the predicted temperature and illustrate the results in the Appendix section A.4, where we again observe the same convergence between uniform sampling and pdPINNs for high numbers of collocation points. 5.3 FOKKER-PLANCK EQUATION For a demonstration of a forward problem, i.e. a setting without any observed data but only initial conditions, we solve the Fokker-Planck (FP) equations in a setting where an analytical solution is available (cf. Särkkä & Solin (2019)). The FP equations describe the evolution of the probability density of the movement of Brownian particles under a drift. More specifically, assume we are given particles at time t0 which are distributed according to p(t0, x). Let the movements of these particles be described by the following stochastic differential equation, where Wt denotes the standard Wiener process: dXt = µ(t, Xt) dt + σ(t, Xt) dWt (15) with known drift µ(t, Xt) and diffusion coefficient D(t, Xt) = σ²(t, Xt)/2. The FP equation for the probability density p(t, x) of the random variable Xt is then given by ∂p(t, x)/∂t = −∂/∂x [µ(t, x) p(t, x)] + ∂²/∂x² [D(t, x) p(t, x)]. (16) We train a network to predict the (probability) density pΘ(t, x) given a known sinusoidal drift and constant diffusion, which are discussed in detail in the Appendix. Data is only provided for the initial condition, and the PDE loss is based on Eq. 16 within the space Ω = [−1.5, 1.5] and time t ∈ [−1, 1]. As the analytical solution is available in the form of a probability density, we can estimate the KL divergence KL(p||pΘ) to evaluate the performance. Furthermore, we can sample collocation points from the true particle distribution p(t, x) (referred to as "p(t, x) as sampler"), offering a "best-case scenario" for pdPINNs. A total of 5000 collocation points were used, and weights were manually tuned based on the error on a validation set. Figure 5a shows the evolution of the KL divergence during training, highlighting that pdPINN-based methods require fewer steps to achieve a low divergence. In addition, sampling from the true particle distribution leads to the fastest improvement and the lowest divergence after 30000 training steps. A qualitative comparison of the results is given in Figure 5b, showing that RAR and uniform sampling fail to propagate the sine wave forward. The ground truth of the problem and wall-times for the different methods are given in the Appendix section A.5. 6 CONCLUSION In this work, we introduced a general extension to PINNs applicable to a great variety of problem settings involving physics-based regularization of neural networks. In order to overcome the limitations of classical mesh-based Eulerian PINNs, we introduced a novel PDE loss that is defined with respect to the particle density in rather general types of PDEs. By employing MCMC methods to sample collocation points from the density approximated by the network, we derive an efficient and easy-to-implement improvement for providing a more appropriate regularization objective in PINNs. In particular, our new pdPINNs are completely mesh-free, thereby overcoming severe efficiency problems of classical PINNs in high-dimensional and sparse settings. Further, the absence of a mesh allows us to elegantly handle settings with uncertain or unknown domain boundaries.
As we have demonstrated, our method is applicable to a wide spectrum of PDEs, ranging from hydrodynamic flow problems to electro- and thermodynamic problems, as well as more general applications of the Fokker-Planck equations. A APPENDIX A.1 BACKGROUND SAMPLING FOR PDPINNS At initialization, the network prediction ρΘ is random and thus does not carry any useful information, i.e. sampling from this density would be meaningless. Therefore, we start training the pdPINNs with a warm-up phase in which samples are obtained from a pre-specified background distribution: x ∼ pbg(t,x) = p(t) pbg(x|t) (17) with p(t) = U(0, T). To avoid introducing a mesh, we could rely on the previously estimated Gaussian distribution introduced in Section 4, i.e. pbg(x|t) = pgauss(x). As a second alternative approach, we consider random convex combinations within the convex hull of {x(i)}_{i=1}^{N}, spanned by c data points summarized as rows of a matrix Z ∈ R^{c×d}. This leads to x = mZ with weights m ∈ R^c drawn from a Dirichlet distribution, i.e. m ∼ Dir(α = 1). Of course, a uniform sampling mechanism on a defined region is also suitable, and the definitive choice depends on the data and PDE at hand. However, we found that all of these methods work well in practice. We initially draw all samples from the background distribution and then slowly increase the proportion of samples obtained from the particle density, as we found that retaining some background samples slightly helps training. A.2 IMPLEMENTATION OF RAR AND OT-RAR For our comparison, we considered the adaptive refinement methods RAR and OT-RAR, proposed by Lu et al. (2021) and Tadiparthi & Bhattacharya (2021, preprint). Both methods rely on consecutive refinements of a fixed grid used as the initial proposal. The number of collocation points is steadily increased, and collocation points, once added, are not removed. To allow for a fairer comparison, we adapt both methods to use a limited budget of points, and in addition we regularly resample them. This leads to slightly modified versions of the methods that are similar in spirit to the originals. For learning the linear mapping proposed by Tadiparthi & Bhattacharya (2021), we rely on the PyOT (Flamary et al., 2021) implementation of Knott & Smith (1984). The pseudo-code for sampling a set of collocation points is given in Algorithm 1 and Algorithm 2. The required input fΘ refers to the PDE approximated by the network, as discussed in Section 1. For more specific details on the methods we refer to the original papers.

Algorithm 1 Adapted RAR
Input: fΘ, uniform distribution UB, number of collocation points k, previous collocation points Xprev.
  Xprop ← [x1, x2, . . . , xk]ᵀ with xi ∼ UB ▷ Sample proposals
  Xcomb ← concat(Xprev, Xprop) ▷ Concatenate old and new points
  Xnew ← topk(Xcomb, ||fΘ(Xcomb)||₂², k) ▷ Keep top k proposed points based on fΘ
Output: Xnew

A.3 EXPERIMENTS: CONSERVATION OF MASS In the supplementary material we provide code in Python for the data generation and for the pdPINN model. Below we provide the details for all the experiments we conducted. Furthermore, we provide short videos showing the predicted density movements for each approach. More details on this can be found in the README.html provided in the supplementary files. All experiments were run on a computing cluster using Nvidia GeForce GTX Titan X GPUs with 12 GB VRAM. Settings that required more memory were run on an RTX8000 with 48GB VRAM. Up to 16 Titan X GPUs could be used in parallel, or 4 RTX8000.
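Referring back to the convex-hull background sampling of Section A.1, a minimal NumPy sketch of that step could look as follows; the way the c support points are chosen per sample and all names are our own assumptions:

import numpy as np

def background_samples(X_obs, n_samples, c=10, rng=None):
    """Draw background collocation points as random convex combinations of observed sensor locations.

    X_obs:     (N, d) array of observed positions x^(i)
    n_samples: number of background points to draw
    c:         number of support points used per sample (rows of Z)
    """
    rng = np.random.default_rng() if rng is None else rng
    N, d = X_obs.shape
    samples = np.empty((n_samples, d))
    for j in range(n_samples):
        idx = rng.choice(N, size=c, replace=False)   # pick c data points as rows of Z
        Z = X_obs[idx]                               # (c, d)
        m = rng.dirichlet(np.ones(c))                # weights m ~ Dir(alpha = 1)
        samples[j] = m @ Z                           # x = m Z lies in the convex hull of the c points
    return samples

# Time coordinates are drawn separately, e.g. t ~ U(0, T).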
In most settings, training in each experiment took less than 10 minutes.

Algorithm 2 Adapted OT-RAR
Input: fΘ, uniform distribution UB, number of collocation points k, number of points for the empirical distribution j < 2k, previous collocation points Xprev.
  Xprop ← [x1, x2, . . . , xk]ᵀ with xi ∼ UB ▷ Sample proposals
  Xcomb ← concat(Xprev, Xprop) ▷ Concatenate old and new points
  Xtarget ← topk(Xcomb, ||fΘ(Xcomb)||₂², j) ▷ j samples for the target empirical distribution
  Xsource ← [x1, x2, . . . , xj]ᵀ with xi ∼ UB ▷ j samples for the source empirical distribution
  MOT ← LinOT(Xsource, Xtarget) ▷ Obtain the linear operator that maps to the target distribution
  Xnew ← [x1, x2, . . . , xk]ᵀ with xi ∼ UB ▷ Sample uniformly
  Xmap ← MOT(Xnew) ▷ Map samples to the target distribution
Output: Xmap

A.3.1 ADDITIONAL EXPERIMENTAL RESULTS 3D Setting. Figure A.6 showcases the projection of the density onto the z-axis for a random run of the OT-RAR method and the Metropolis-Hastings-based pdPINN when using 2048 collocation points. The OT-RAR PINN shows disconnected density predictions that clearly violate mass conservation, whereas the Metropolis-Hastings-based pdPINN is capable of mostly preserving it. The boxplot in Figure A.8 highlights the difference in the required number of collocation points. 2D Setting. As mentioned in Section 5, we repeated the Conservation of Mass experiment in a slightly altered setting, where the data is projected onto the xy-plane, reducing it to a 2D+time problem. The general setup is similar to the 3D setting, although a smaller network and different training parameters are used, which are listed in the sections below. A.3.2 DATA GENERATION Here we provide a more detailed description of the generated data, namely the velocity field used and the method for obtaining simulated "radar measurements". Velocity field. The velocity field in the xy-plane was generated from a scalar potential field Φ : R² → R and the z-component of a vector potential a : R² → R. Through the Helmholtz decomposition¹ we can construct the velocity field vxy : R² → R²: vxy(x, y) = −∇Φ + [∂a/∂y, −∂a/∂x]ᵀ. (18) For both experiments the following fields were used: Φ(x, y) = −(1/2) (x − 2)(y − 2), (19) a(x, y) = −(1/5) exp(−((2/3) x)² − ((2/3) y)²). (20) The derivatives were obtained using the symbolic differentiation library SymPy (Meurer et al., 2017). To add a non-steady component, the resulting velocity field is modulated in amplitude as a function of time t ∈ [0, 3]: vxyt(t, x, y) = vxy(x, y) · ((3/2) |sin((2/3) π t)| + 0.05). (21) The z (altitude) component of the velocity only depends on time and is given by: vz(t) = 1.6 · sin((4/3) π t). (22) Simulation. For the initial distribution of the fluid, the particle positions were drawn from Gaussian mixtures. For t ∈ [0, 3], these particles were propagated through the velocity field constructed above. Overall, the paths of the roughly 240000 parcels were simulated using a basic backward Euler scheme.
¹ This is the 2D formulation of the Helmholtz decomposition, where the vector potential has non-zero components only along the z-axis, i.e. a3d = [0, 0, a]ᵀ. The full decomposition is commonly written as v3d = −∇Φ3d + ∇ × a3d.
Measurements. The measurements at the sensors were obtained by counting the number of particles within a given radius over multiple timesteps. The density corresponds to the mass divided by the sensor area, and the velocity is an average over all the particle velocities.
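For reference, the velocity field defined in Eqs. 18–22 can be evaluated with a few lines of NumPy, as sketched below; the hand-written derivatives replace the SymPy differentiation used by the authors, and the function names are our own:

import numpy as np

def velocity_xy(t, x, y):
    """Time-modulated 2D velocity field from Eqs. 18-21."""
    # -grad(Phi) with Phi = -1/2 (x - 2)(y - 2)
    curl_free = np.array([0.5 * (y - 2.0), 0.5 * (x - 2.0)])
    # rotational part [da/dy, -da/dx] with a = -1/5 exp(-((2x/3)^2 + (2y/3)^2))
    a = -0.2 * np.exp(-((2.0 * x / 3.0) ** 2 + (2.0 * y / 3.0) ** 2))
    da_dx = a * (-8.0 * x / 9.0)
    da_dy = a * (-8.0 * y / 9.0)
    div_free = np.array([da_dy, -da_dx])
    amplitude = 1.5 * np.abs(np.sin(2.0 / 3.0 * np.pi * t)) + 0.05   # Eq. 21
    return amplitude * (curl_free + div_free)

def velocity_z(t):
    """Altitude component of the velocity (Eq. 22)."""
    return 1.6 * np.sin(4.0 / 3.0 * np.pi * t)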
For the training data, additional zero-mean isotropic Gaussian noise is added to all measurements. In the 3D setting, data measurements of density and velocity are obtained by 132 sensors on the xy-plane, within the region [−3, 3]², at 11 equidistant timesteps. In the 2D setting, the same set of sensors is used. A.3.3 ARCHITECTURE AND TRAINING In both experiments, the networks for density ρΘ1 and velocity vΘ2 prediction (parameterized by Θ1 and Θ2, respectively) are fully-connected networks with sinusoidal activation functions, as proposed by Sitzmann et al. (2020). The number of layers and units for each setting is shown in Table A.1. The sine frequency hyperparameter required in the SIREN architecture was tuned by hand according to the validation loss of the baseline model (i.e. without a PDE loss), leading to a sine frequency of 12 for the 2D setting and 5 for the 3D setting. We note that the proposed default value of 30 in Sitzmann et al. (2020) heavily overfits our relatively low-frequency data, and we thus recommend adjusting this hyperparameter when SIRENs are used in PINNs. For training the networks, the ADAM optimizer (Kingma & Ba, 2014) with a learning rate of 8 × 10⁻⁴ (2D setting) or 10⁻⁴ (3D setting) was used. The learning rate was multiplied by a factor of 0.99 each epoch. All models were trained for 300 (3D setting) or 500 (2D setting) epochs. The 2D setting was trained using full-batch gradient descent, whereas for the 3D setting we used a mini-batch size of 6931. In all experiments we trained and evaluated on 10 different random seeds. A.4 EXPERIMENTS: HEAT EQUATION The dataset for the heat equation experiment was generated by numerically solving the heat equation with the finite difference method, more precisely the Forward Time, Centered Space (FTCS) approximation (Recktenwald, 2004). We used Dirichlet boundary conditions in the form of zero temperature on a square boundary far away from the relevant domain. These boundary conditions are not provided to the PINNs, making the setting slightly more difficult. Overall, the dataset is composed of 1000 training points, 1971120 test points, and 492780 validation points. We made sure the training points contained enough information about the initial condition, i.e. we selected a sufficient number of points around the initial source of non-zero temperature. In contrast, validation and test points are taken uniformly in time and space. During the warm-up phase of the pdPINN training, collocation points were sampled uniformly; afterwards, 90% of the samples were drawn from the particle density distribution, which is proportional to the modeled temperature. Collocation points were re-sampled every 500 epochs. In contrast to the previous experiments, the employed architecture is a fully-connected two-layer neural network with 32 hidden units and tanh activations. The implementation is in PyTorch (Paszke et al., 2019), using the ADAM optimizer (Kingma & Ba, 2014) combined with an exponential learning rate scheduler which multiplies the learning rate by a factor of 0.9999 at each epoch, starting with a rate of 10⁻⁴ and decreasing it until reaching a minimum value of 10⁻⁵. Training was terminated through early stopping, as soon as the validation R² did not improve for more than 3000 epochs. Additional results. Figure A.9 illustrates the test R² of the predicted T averaged over 20 different seeds. Error bars correspond to 95% confidence intervals for the mean estimation, based on 1000 bootstrap samples, while colors indicate the different PDE weights w2 explored.
As in the previous settings, we show that with few samples (16) the regularization enforced by the PDE loss is not strong enough, leading, as expected, to comparable results for both approaches; PINNs and pdPINNs thus perform similarly in this regime. However, as the number of samples increases (32, 64, 128, 256), the PDE loss enforced by the proposed pdPINNs quickly and steadily outperforms uniform sampling. Lastly, we also verified that in the limit of many samples (512–1024) the two sampling strategies converge, as in such a low-dimensional domain the uniform samples fully and densely cover the considered area. This, again, is in line with the observed results of the other experiments. A.5 EXPERIMENTS: FOKKER-PLANCK EQUATIONS IN TENSORFLOW Within the Fokker-Planck experiment we showcase the different training behaviors of uniform sampling, RAR, and multiple MCMC samplers. Due to the low dimensionality of the problem, we additionally consider an Inverse-Transform (IT) sampler (Steele, 1987) for efficiently sampling from the density. The IT sampler relies on the empirical CDF estimated via uniform samples drawn over the whole domain. This method does not require building up a Markov chain and is thus very fast, but it only works well in low dimensions. More specifically, we compare the following methods for selecting collocation points, with a highly efficient implementation of the MCMC methods provided by TensorFlow Probability:
I.) Uniform sampling
II.) Residual Adaptive Refinement (Lu et al., 2021)
III.) pdPINN with Inverse-Transform (IT) sampling (Steele, 1987)
IV.) pdPINN with Metropolis-Hastings (MH) MC with parallel tempering (Earl & Deem, 2005)
V.) pdPINN with Hamiltonian MC (HMC) with parallel tempering (Earl & Deem, 2005) and dual averaging step-size adaptation (Hoffman et al., 2014, section 3.2)
A.5.1 SETTING AND ANALYTICAL SOLUTION We consider the following setting over the time interval [t0, tn] = [−1, 1], with drift function µ, noise σ, and initial particle positions p(x|t = t0) given by µ(Xt, t) = µ(t) = sin(10t), (23) σ(Xt, t) = σ = 0.06, (24) p(x|t = t0) = N(0, 0.02² · Id). (25) The PDE has an analytical solution (cf. Särkkä & Solin (2019)), which is given by p(x|t) = N(µs(t), σs²(t)), (26) p(t) = U(t0, tn), (27) µs(t) = −cos(10t)/10 + cos(10)/10, (28) σs²(t) = 0.0036 t + 0.004. (29) For evaluating the deviation of our prediction from the solution, we estimate the KL divergence between the analytical solution and the network approximation, KL(p(x, t) || p̂Θ(x, t)), by sampling 10000 points from the true p(x, t). A.5.2 SETUP We use a SIREN network and additionally sample (5000) collocation points at the initial time-step, which is the default behavior of DeepXDE. An overview of the architecture and training details is given in Table A.2. Experiments were performed with an NVIDIA GeForce RTX 2080 Ti and an Intel(R) Xeon(R) CPU E5-1660 v3 @ 3.00GHz processor. A.5.3 WALL TIME The wall times for the different methods are provided in Figure A.10. Although Metropolis-Hastings and Hamiltonian Monte Carlo require more time per step compared to uniform sampling, the inverse transform sampling used here achieves a similar speed.
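To make method III concrete, the following is a minimal NumPy sketch of inverse-transform sampling from an unnormalized predicted density via an empirical CDF over uniform proposals; the grid size, function names, and the treatment of time as an extra uniform coordinate are illustrative assumptions:

import numpy as np

def inverse_transform_sample(rho, low, high, n_samples, n_grid=10_000, rng=None):
    """Sample x from an unnormalized 1D density rho(x) on [low, high] via its empirical CDF."""
    rng = np.random.default_rng() if rng is None else rng
    xs = np.sort(rng.uniform(low, high, size=n_grid))      # uniform proposals over the whole domain
    weights = rho(xs)
    cdf = np.cumsum(weights)
    cdf /= cdf[-1]                                          # the unknown normalization constant cancels
    u = rng.uniform(size=n_samples)
    return np.interp(u, cdf, xs)                            # invert the empirical CDF

# For the time-dependent density, t can be drawn uniformly on [t0, tn] and x sampled from
# the network density at that t, e.g. (with a hypothetical rho_theta):
# t = -1.0 + 2.0 * np.random.rand(); x = inverse_transform_sample(lambda x: rho_theta(t, x), -1.5, 1.5, 1)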
1. What is the focus and contribution of the paper on PINNs? 2. What are the strengths of the proposed approach, particularly in terms of sampling? 3. What are the weaknesses of the paper regarding algorithmic choices and ablation studies? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This manuscript proposes an interesting idea based on the observation that PINNs are normally trained using uniform sampling, which might not be the optimal case. Using the proposed method, the region of interest will be naturally sampled with a higher density. This manuscript applied their method to a set of common PDE problems, including Fokker-Planck equations. Strengths And Weaknesses Strength: This manuscript is well-written. The proposed method is easy to follow: it is true that the sampling matters in training PINNs. The attached code showed good reproducibility. The experiments validating this method are extensive. Weaknesses: There are multiple algorithmic choices not justified in the manuscripts. For example, why use the square as the non-negative translator instead of the absolute? More ablation studies are expected to examine them. Clarity, Quality, Novelty And Reproducibility The manuscript is nicely written. The code is attached. I can reproduce some of the results.
ICLR
Title Progressive Compressed Records: Taking a Byte Out of Deep Learning Data Abstract Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by using progressive compression to convert a single dataset into multiple datasets of increasing fidelity—all without adding to the total dataset size. Empirically, we implement PCRs and evaluate them on a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ. Our results show that different tasks can tolerate different levels of compression. PCRs use an on-disk layout that enables applications to efficiently and dynamically access appropriate levels of compression at runtime. In turn, we demonstrate that PCRs can seamlessly enable a 2× speedup in training time on average over baseline formats. 1 INTRODUCTION Distributed deep learning exploits parallelism to reduce training time, and consists of three key components: the data pipeline (storage), the forward/backward computation (compute), and the variable synchronization (network). A plethora of work has investigated scaling deep learning from a compute- or network-bound perspective (e.g., Dean et al., 2012; Cui et al., 2016; Abadi et al., 2015; Cui et al., 2014; Jouppi et al., 2017; Lim et al., 2019; Zhu et al., 2018; Alistarh et al., 2017; Lin et al., 2018; Wen et al., 2017; Wangni et al., 2018; Zhang et al., 2017). However, little attention has been paid toward scaling the storage layer, where training starts and training data is sourced. Unfortunately, hardware trends point to an increasing divide between compute and networking or storage bandwidth (Li et al., 2016; Lim et al., 2019; Kurth et al., 2018). For example, the transportation of data for machine learning is a key factor in the design of modern data centers (Hazelwood et al., 2018), which are expected to be serviced by slow, yet high capacity, storage media for the foreseeable future (David Reinsel, 2018; Cheng et al., 2015; Rosenthal et al., 2012). This, combined with the memory wall—a lack of bandwidth between compute and memory—suggests that, while computation may be sufficient moving forward, the mechanisms for moving data to the compute may not (Wulf & McKee, 1995; Kwon & Rhu, 2018; Hsieh et al., 2017; Zinkevich et al., 2010). 
The storage pipeline is therefore a natural area to seek improvements in overall training times, which manifest from the storage medium, through the network, and into the compute nodes. In this work, we propose a novel on-disk format called Progressive Compressed Records (PCRs) as a way to reduce the bandwidth cost associated with training over massive datasets. Our approach leverages a compression technique that decomposes each data item into deltas, each of which increases data fidelity. PCRs utilize deltas to dynamically compress entire datasets at a fidelity suitable for each application’s needs, avoiding duplicating the dataset (potentially many times) at various fidelity levels. Applications control the trade-off between dataset size (and, thus, bandwidth) and fidelity, and a careful layout of deltas ensures that data access is efficient at a storage medium level. As a result, we find that for a variety of popular deep learning models and datasets, bandwidth (and therefore training time) can be easily reduced by 2× on average relative to JPEG compression without affecting model accuracy. Overall, we make the following contributions: 1. In experiments with multiple architectures and several large-scale image datasets, we show that neural network training is robust to data compression in terms of test accuracy and training loss; however, the amount of compression that can be tolerated varies across learning tasks. 2. We introduce Progressive Compressed Records (PCRs), a novel on-disk format for training data. PCRs combine progressive compression and careful data placement to enable applications to dynamically choose the fidelity of the dataset they consume, reducing data bandwidth. 3. We demonstrate that by using PCRs, training speed can be improved by 2× on average over standard formats using JPEG compression. This is achieved by selecting a lower data fidelity, which, in turn, reduces the amount of data read without significantly impairing model performance. 2 BACKGROUND Two complementary concepts make up the process of storing training data: the layout of the data on the storage medium and the representation of the data. Data layout is important because it can help fully utilize the bandwidth potential of the underlying storage system. Data representation is important because it can reduce the amount of data transferred per data unit (i.e., a bandwidth requirement reduction). An example of data representation within the scope of this work is compression, which increases the computation per bit—a key property to consider as computation increases faster than bandwidth to storage. Compression may lower image quality by introducing artifacts or blur. Record Layouts. Learning from data requires sampling points from a training set, which can cause small, random accesses that are detrimental to the performance of the storage device. Record layouts, such as TensorFlow’s TFRecords (TFRecords) or MXNet’s ImageRecord (ImageRecord), attempt to alleviate this problem by batching data points together to increase access locality. Batches of training data (i.e., dataset subsets) are then accessed together, amortizing delays in access time across multiple data points. These batches of data are called records. The key to any record layout is the serialization, which is the conversion of data structures into byte streams. Record designs have different performance properties (e.g., space or access time) when written to disk, as shown in Figure 1. Image Compression. 
Compressed forms are commonly used to represent training data. JPEG (Wallace, 1992) is one of the most popular formats for image compression and is used ubiquitously in machine learning (e.g., Deng et al., 2009; Russakovsky et al., 2015; Lin et al., 2014; Everingham et al., 2010). Most compression formats (including JPEG) only allow for the compression level, i.e., the trade-off between data size and fidelity, to be set at encoding time, which often results in choosing this level independent of the application. This can result in over-compression, which may negatively impact application convergence quality, or under-compression, which results in excess data size, and thus, slower storage system performance. Worse, deep learning pipelines often involve an application-defined post-processing step (e.g., data augmentation), which may further distort an image and obscure the relationship between image fidelity and model accuracy (Bishop, 1995; Karras et al., 2018; Dziugaite et al., 2016; Arnab et al., 2018). While setting encoding-time parameters is unavoidable, the ability to decompress data as it becomes available (i.e., dynamic compression) provides a means to avoid some of the bandwidth expenses of under-compression by simply terminating decompression once sufficient fidelity is reached. In Figure 2, we provide a high-level illustration of the JPEG algorithm, which can be customized to support dynamic compression. First, an image is split into blocks of size 8 × 8. Each block is converted into the frequency domain, such that frequency 0 is the average color of the block, and higher frequencies encode rapid changes in the block. The low frequencies, such as the average value of the block, store the bulk of the perceptually-relevant content in the image (e.g., knowing the block is mostly blue is more important than knowing a white wave is rippling through it). Quantization, which discards information from the block and results in compression, thus prioritizes discarding higher frequencies. The resulting quantized table is then serialized into a flat form. Since data is rendered on a screen from left to right, top to bottom, it makes sense to encode the data in this manner, which results in a sequential format1. Decoding the resulting data is simply a matter of inverting (albeit losslessly) the process that we just described. Progressive Image Compression. Progressive formats allow data to be read at varying degrees of compression without duplication. With the sequential case, data is ordered by blocks, and thus, partially reading the data results in “holes” in the image for unread blocks (Wallace, 1992). Dynamic compression ensures that all blocks get some information (deltas) before revising them (with more deltas). As progressive formats are simply a different traversal of the quantization matrix, with all else being equal, they contain the same information as sequential JPEG (JPEGTran LibJPEG). Progressive JPEG, combined with an additional rearrangement of data, forms the basis of the idea behind PCRs. In Figure 2, non-progressive formats serialize the image matrix in one pass, while progressive formats serialize the matrix in disjoint groups of deltas which are called scans. Scans are ordered by importance (e.g., the first few scans improve fidelity more than subsequent scans). Thus, any references to images generated from scan n will implicitly assume that the image decoder had access to all prior scans (i.e., {scan 1, scan 2, . . . , scan (n− 1)}). 
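For intuition, the per-block transform-and-quantize step described above can be sketched in a few lines of NumPy/SciPy; the quantization table below is a placeholder illustrating the idea and is not the table used by any particular JPEG quality setting:

import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2D type-II DCT of an 8x8 block, as used by JPEG."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def quantize(block, q_table):
    """Divide the frequency coefficients by the quantization table and round.

    Larger entries in q_table (typically at high frequencies) discard more information.
    """
    return np.round(dct2(block - 128.0) / q_table)

# Illustrative (non-standard) quantization table that grows with frequency:
q_table = 1.0 + np.add.outer(np.arange(8), np.arange(8)) * 4.0
block = np.random.randint(0, 256, size=(8, 8)).astype(np.float64)
coeffs = quantize(block, q_table)  # coeffs[0, 0] encodes the block's average intensity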
The bottom of Figure 2 shows how image fidelity improves from a single scan to utilizing all scans. 3 PROGRESSIVE COMPRESSED RECORDS In this section, we introduce a novel record format for machine learning training called Progressive Compressed Records (PCRs). PCRs are a combination of both layout and data representation. Efficient layouts guarantee that hardware is fully utilized (in terms of bandwidth), while efficient data representations can reduce the total amount of work that is required of the system. To this end, we introduce the concept of scan groups in Section 3.1, which leverage both layout and progressive compression to obtain dynamic compression, allowing high-performance reads while reducing the amount of data read. Using progressive compression, scan groups break images into deltas, which are then rearranged in order to facilitate reduced, yet sequential, data access. In Section 3.2, we discuss how PCRs are implemented, covering both creating PCRs (encoding) and reading them (decoding). Because the PCR implementation boils down to a bit shuffle, its benefits are that: 1) PCRs are easy to implement, 2) they are fundamentally lossless, and 3) processing them is fast. As we demonstrate in Section 4, while PCRs can be implemented easily, they manifest in large speedups for a variety of scenarios. Further, PCRs can be generalized beyond images and JPEG. 3.1 SCAN GROUPS Scan groups are a collection of scans (deltas) of the same fidelity. Scan groups combine layout with progressive compression to allow reading subsets of the compressed data with high hardware efficiency. PCRs make the assumption that the entire training data will be read at the same fidelity. Using this assumption, scan groups rearrange the data such that all deltas of the same fidelity are grouped together. This, in turn, enables groups of deltas to be read together sequentially, which creates dynamicity in the decoding process. Since scans are sorted by importance, and scan groups are a set of scans, the scan groups are also sorted by importance. To paint a clear picture of how scan groups work, we point the reader to Figure 3. PCRs begin with some metadata which is assumed to be needed by all machine learning tasks, such as labels or bounding boxes. In practice, metadata is small in size, and, thus, the space overheads are negligible. The metadata is followed by scan groups, which consist of scans. The scan 1 representation of the shark in Figure 2 will be available in its record once data is read up to offset 1. Likewise, the scan 3 representation will be available once the record is read up to offset 3, and the representation will be crisper, as 3 scans were used per image rather than 1. Reading up to the end of the record yields the most complete representation of the image. As scan groups consist of groups of the same fidelity, every image contained in a record is available at the same fidelity at the same group offset. Users of PCRs can read data at a certain scan fidelity by simply reading the on-disk byte stream from the start of the PCR (i.e., offset 0) to the byte offset of the desired scan group. Partially reading the records results in bandwidth savings without re-encoding the data.
1"Sequential" refers to in-memory and should not be confused with sequential on-disk access.
3.2 IMPLEMENTATION There are two fundamental PCR implementation details: the encoding process and the decoding process.
The encoding process transforms a set of JPEG files into a directory, which contains 1) a database for PCR metadata and 2) at least one .pcr file. The decoding process, which takes the directory as input and yields a set of JPEG images, efficiently inverts a subset of the encoding. The dataset is split into many PCRs, and, thus, the training process is reading tens to hundreds of .pcr files per epoch. The data loader is where the PCR decoding library interfaces with the inputs provided to deep learning libraries (e.g., TensorFlow (Abadi et al., 2015), MXNet (Chen et al., 2015), PyTorch (Paszke et al., 2017)). Below, we describe how each of these steps is done. Encoding. Given a set of images, the PCR encoder must break the images into scans, group the scans into scan groups, and sort the scan groups by fidelity. Once the groups are sorted, the PCR encoder can serialize the groups while taking note of their offsets (so that subsets may later be decoded). The metadata (e.g., labels) is prepended to the serialized representation, and the serialized representation is written to disk. We focus on grouping JPEG due to its generality, but PCRs can use any dataset-level progressive format. Images can be decomposed in both space and fidelity; other data modalities (e.g., video) may also have time. Our implementation uses JPEGTRAN (JPEGTran Man Page) to losslessly transform the set of JPEG images into a set of progressive JPEG images. With the default settings, each JPEG is broken up into 10 scans. The encoder scans the binary representation of the progressive JPEG files, searching for the markers that designate the end of a scan group. The encoder thus has access to all 10 offsets within the JPEG files that can be used to determine the boundaries between scan regions. Forming scan groups requires grouping the scan regions with the same fidelity together, which can be done in one pass over the set of images corresponding to that PCR. This grouping must be reversible, as the decoding process will un-group the scans to reconstruct the original images. This grouping can be done with existing serialization libraries. We use Protobuf (Protobuf) to serialize the groups as well as the labels. However, it is key that every group (and the metadata) be serialized as a separate message, as Protobuf can rearrange the contents within a message, and thus can rearrange the ordering of the groups themselves. We finally concatenate the contents of the messages and write them out as one file. As shown in Appendix A.5, any record format conversion can be expensive; PCRs benefit from requiring only a single conversion for multiple tasks. Decoding. To decode a PCR file, one has to first lookup the file’s scan group offsets in the database. The offsets provide sufficient information to do a partial read of the file (e.g., instead of reading the entire file, we read only enough bytes to read up to the desired scan group). Decoding the JPEGs requires inverting the PCR scan-group grouping process for the available scan-groups prior to JPEG decode. Since we are missing scan-groups, we terminate the byte stream with an End-of-Image (EOI) JPEG token—this technique allows most JPEG decoders to render the byte stream with only the available subset of scans. The bulk of the inverse conversion is done in 150 lines of C++ code. Loader. We implemented PCR loaders using PyTorch’s dataloader as well as DALI (NVIDIA, 2018)’s ExternalSource operator to return batches of images at a configurable fidelity (with the corresponding labels). 
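As an aside, the decoding step described above can be sketched in a few lines of Python; the offset bookkeeping, file names, and function names are our own illustrative assumptions, and the paper's actual implementation is in C++:

from io import BytesIO
from PIL import Image, ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True  # be tolerant of streams that end early
EOI = b"\xff\xd9"                       # JPEG End-of-Image marker

def decode_partial(progressive_jpeg: bytes, scan_offsets: list, k: int) -> Image.Image:
    """Decode a progressive JPEG using only its first k scans.

    scan_offsets[i] is the byte offset at which scan i ends (recorded at encoding time).
    """
    truncated = progressive_jpeg[: scan_offsets[k - 1]] + EOI
    img = Image.open(BytesIO(truncated))
    img.load()  # force decoding of the truncated stream
    return img

# with open("example_progressive.jpg", "rb") as f:   # hypothetical file and offsets
#     data = f.read()
# low_fidelity = decode_partial(data, offsets, k=2)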
We find that a pipeline abstraction simplifies loader design, since recordbased datasets can be easily iterated sequentially. In contrast, the PyTorch Dataloader abstrac- tion, which assumes that we can index randomly into an in-memory data structure (e.g., i = RandInt(0, n); (x, y) = data[i];), is harder to use for constantly fetching record formats off disk. Our implementation, while being only several hundred lines of code, obtains image rates that are competitive (e.g., faster/slower depending on number of scans) with the included DALI TFRecord loader, showing that PCRs can be implemented efficiently (i.e., fast enough to rarely bottleneck data loading) with a low amount of engineering effort. 4 EXPERIMENTS This section presents our evaluation of PCRs using a suite of large-scale image datasets. As large images are more taxing to a system’s network and storage, our evaluation focuses on datasets with high-resolution images. We describe our experimental setup in Section 4.1. We present our evaluation results in Section 4.2, showing that halving data bandwidth per image results in comparable accuracy but with half the training time. In Section 4.3, we analyze the intuitive relationship between objective measures of image fidelity and time-to-accuracy. Finally, in Section 4.4, we present results that trace the training time speedups to the data loading times themselves. 4.1 EVALUATION SETUP Our evaluation uses the ImageNet ILSVRC (Deng et al., 2009; Russakovsky et al., 2015), HAM10000 (Tschandl et al., 2018), Stanford Cars (Krause et al., 2013), and CelebA-HQ (Karras et al., 2018) datasets, which are described below. See Appendix A.4 for additional details. Datasets. • ImageNet-100 ImageNet provides a wide diversity of classes, of which we focus on the first 100 to make training times more tractable. Since classes are roughly ordered by ImageNet categories, this results in a fine-grained, i.e., hard to classify, multiclass task. We convert the dataset into PCRs in batches of 1024, which results in 126 PCRs. We use the full ImageNet dataset in Appendix A.7. • HAM10000 We split the HAM10000 dataset randomly 80%/20% between train and test. We convert the dataset into PCRs in batches of 64, which results in 125 PCRs of similar size as the ones used for ImageNet-100. • Stanford Cars The Stanford Cars dataset is another fine-grained classification dataset, since all images are cars, and there are 196 classes spread over 16k images. We believe this dataset highlights some of the worst-case training scenarios, as it is considerably easier to predict highly compressed variants of unrelated images by exploiting low frequency image statistics (e.g., planes vs. frogs). We explore a coarse-grained version of Cars in Appendix A.6. Cars has 63 PCRs. • CelebAHQ-Smile CelebA-HQ is a high-resolution derivative of the CelebA dataset (Liu et al., 2015), which consists of 30k celebrity faces at 10242. We use the annotations provided by CelebA to construct a smiling or not smiling dataset. We split the 30k dataset into 80%/20% train/test, and we convert the training set into 93 PCRs. All datasets utilize resizing, crop, and horizontal-flip augmentations, as is standard for ImageNet training. We provide examples of scan groups for these datasets in Appendix A.8. Training Regime. We use pretrained ImageNet weights for HAM10000 and Cars due to the limited amount of training data. 
We use standard ImageNet training, starting the learning rate at 0.1 (with gradual warmup (Goyal et al., 2017)) and dropping it on epoch 30 and 60 by 10×. After augmentations, all inputs are of size 224× 224. The pretrained experiments (HAM10000 and Cars) start at a learning rate of 0.01 to avoid changing the initialization too aggressively. We use fp16 training (Micikevicius et al., 2018) as it results in an additional 10% images per second (see Appendix A.3). We use a ResNet18 (He et al., 2016) and ShuffleNetv2 (Ma et al., 2018) architecture for our experiments with a batch size of 128 per each worker. We run each experiment at least 3 times to obtain confidence intervals given different random seeds and sources of non-determinism such as multi-threading and I/O. System Setup. We run distributed experiments on a 16-node Ceph (Weil et al., 2006) cluster connected with a Cisco Nexus 3264-Q 64-port QSFP+ 40GbE switch. Each node has a 16– core Intel E5–2698Bv3 Xeon 2GHz CPU, 64GiB RAM, NVIDIA TitanX, 4TB 7200RPM Seagate ST4000NM0023 HDD, and a Mellanox MCX314A-BCCT 40GbE NIC. All nodes run Linux kernel 4.15 on Ubuntu 18.04, CUDA10, and the Luminous release (v12.2.12) of Ceph. We use six of the nodes as Ceph nodes; five nodes are dedicated as storage nodes in the form of Object Storage Devices (OSDs), and one node is used as a Ceph metadata server (MDS). The remaining 10 nodes are used as machine learning workers for the training process. This means there is a 2:1 ratio between compute and storage nodes. We use PyTorch (Paszke et al., 2017) (v1.12) with NVIDIA Apex (Apex) (v0.1) and NVIDIA DALI (NVIDIA, 2018) (v0.14.0). We use at least four worker threads to prefetch data in the loader. While we focus on this particular distributed setting, we observe similar time-to-accuracy gains on a single machine with eight GPUs sharing the same disk, and we believe the results will generalize to different setups. 4.2 TIME TO ACCURACY The time-to-accuracy results for ResNet18 training are presented in Figure 4, while those of ShuffleNetv2 are presented in Figure 6. See Appendix A.2 for a tabular view and Appendix A.1 for the corresponding training loss results. All scan groups within a dataset were run for the same amount of epochs, so lower scan groups finish earlier. 90 epochs are shown for ImageNet, 150 epochs are shown for HAM10000, 250 epochs are shown for Stanford Cars, and 90 epochs are shown for CelebAHQ-Smile. We sample the test accuracy every 15 epochs for non-ImageNet datasets to reduce interference with training measurements. To avoid measuring implementation differences with other loaders, our evaluation focuses on the differences obtained by reading various amounts of scan groups. Reading all the data (up to scan group 10) is the baseline. First, we note that for all datasets, except for Cars, PCRs provide a 2× boost to time-to-accuracy compared to the baseline. The reason for this speedup is that lower scan groups are smaller. As shown in Figure 5, scan group 5 is roughly half the size of the baseline, and scan group 1 is a fifth of scan group 5 (i.e., a potential 10× bandwidth savings). This trend holds across datasets (see Appendix A.1). As we will discuss in Section 4.4, the space savings manifest in reduced dataloader latencies. Second, we note that there is an inherent trade-off between convergence quality and the speedup attained by using less storage resources. 
In general, although lower fidelity scan groups allow the system to operate more efficiently, they do so at the expense of model convergence. Scan group 1, the lowest fidelity scan, performs poorly, especially on Cars, where fine-grained details are important. Scan groups limit the maximum achievable accuracy on a task; if learning plateaus prematurely, applications should raise the scan group in a manner similar to dropping the learning rate. Third, the relative rankings of scan groups are relatively stable across models and datasets, which reduces tuning efforts in choosing the appropriate scan group. We further relate these rankings to the fidelity of the scan groups in Section 4.3. Our conclusion is that, for most datasets, scan group 5 costs half as much in terms of bandwidth, but reaches the same level of test accuracy as the baseline—thus, it is a good default. This is most apparent for ImageNet and HAM10000, which are challenging enough for small variations in image fidelity to make a commensurate difference in test accuracy. In contrast, Cars is too fine-grained to allow images to be degraded, and CelebAHQ-Smile is too coarse-grained for image degradation to matter. 4.3 THE RELATIONSHIP BETWEEN IMAGE FIDELITY AND TEST ACCURACY We use MSSIM (Wang et al., 2003), a standard measure of image similarity, to compare how various scans approximate the reference image, and we show the results in Figure 7. We find that there is a strong connection between MSSIM and the resulting final test accuracy, especially when comparing scan groups within a task. Our preliminary tests demonstrate that scan groups that have very similar MSSIM perform very similarly, which is why only groups 1, 2, 5, and the baseline are shown. Due to the way progressive JPEG is coded by default, groups tend to cluster (e.g., 2, 3, and 4 are usually similar, while 5 introduces a difference). We note that MSSIM being poor (e.g., scan group 1 for cars) or MSSIM being close to baseline (e.g., scan group 5 for HAM10000) are good predictors of relative test accuracy within tasks. MSSIM can therefore be used as a diagnostic for choosing scans. 4.4 THE RELATIONSHIP BETWEEN SCANS AND DATA STALLS The datasets we evaluated show that data loading can slow down the training process. To highlight these slowdowns, and the improvements PCRs achieve by not using all scan groups, we present the loading time of data for the ResNet18 ImageNet-100 run in Figure 8. We obtain similar results for the other datasets. The baseline of using all scan group results in high periodic loading stalls, where the prefetching queue was drained. Upon blocking, training cannot proceed until the worker threads obtain a full batch of data. Periods of (mostly) no stalls are caused by both threads pre-fetching the data and single records servicing multiple minibatches. Using fewer scan groups reduces the amount of data read, which results in lower magnitude stalls. We observe these stalls with both DALI and PyTorch loaders. 5 RELATED WORK Training Over Large Datasets. Training with massive amounts of parallelism (thus stressing system bandwidth) while achieving near-linear speedup has been the focus of previous work, and it highlights a practical need for efficient data pipelines at scale. A common objective is training mod- els over ImageNet in a record amount of time (Goyal et al., 2017; You et al., 2018; Jia et al., 2018; Ying et al., 2018; Yamazaki et al., 2019). 
This line of work, while demonstrating immense bandwidth needs, typically keeps data in memory, avoiding storage problems altogether. Recently, the high performance computing community has become interested in training models at massive scale (27k GPUs) (Kurth et al., 2018). Since each GPU matches a disk in bandwidth, the dataset was partitioned among the local memory/storage of the nodes, avoiding the distributed filesystem. Our work attempts to reduce the storage bottleneck altogether, such that anything from a couple disks to a distributed file system could service many GPUs. A separate line of work shows that I/O is a significant bottleneck for certain tasks and proposes optimizing I/O via a set of deep-learning specific optimization to LMDB (Pumma et al., 2019). In contrast, our focus is more on data representation, which is independent of the internals of the storage system. Production systems such as TFX (Baylor et al., 2017) have used custom Protobuf parsers to get 2–5× speedups for simple (e.g., linear) models; these techniques are complementary to ours and reduce loader computational overheads. Dataset Reduction Techniques. The availability of larger datasets has spawned interest in learning algorithms that guaranteed both “good” model accuracy and lower computational complexity. Data reduction techniques, such as sketching, coresets, clustering, and sampling, have been used to reduce the size of a training set (Karnin & Liberty, 2019; Feldman et al., 2013; Liberty, 2013; Woodruff, 2014; Daniely et al., 2017; Kabkab et al., 2016; Bachem et al., 2017). A different approach is to use the unaltered training set, but reduce the size of the active training set to reduce bandwidth requirements (Matsushima et al., 2012). In contrast, we modify the data representation and layout to be more efficient across a wide variety of models. Compression. Finally, the reduction of data size via compression methods is ubiquitous across computer systems. To avoid costly model transmission/storage, prior work compressed neural network models (Han et al., 2016b;a; 2015; Cheng et al., 2017; Xu et al., 2018; Hwang & Sung, 2014; Anwar et al., 2015; Denton et al., 2014). Similarly, dataset distillation (Wang et al., 2018) compresses a model’s parameters into a few training examples. Our work attempts to compress data for training, and not the network itself. Prior work has looked into optimizing training systems by compressing neural network training network traffic (Lim et al., 2019; Alistarh et al., 2017; Lin et al., 2018; Wen et al., 2017; Wangni et al., 2018; Zhang et al., 2017). This trend is not specific to machine learning; prior work in databases, computer memories, and the web used compression to reduce system bandwidth requirements (Zukowski et al., 2006; Abadi et al., 2006; Pekhimenko et al., 2018; 2012; Yan et al., 2017; Agababov et al., 2015). Our work focuses on bandwidth for ML data pipelines by utilizing the compression robustness found in most models. Other work modifies models to be able to directly train on compressed representations for the purpose of avoiding decoding or reducing model complexity (Gueguen et al., 2018; Torfason et al., 2018; Fu & Guimaraes, 2016; Ulicny & Dahyot, 2017). Our work differs in motivation, as we do not focus on model computation or make modifications to the models. 
Previous work has investigated how image degradation (e.g., JPEG artifacts) affects inference (Dodge & Karam, 2016; Vasiljevic et al., 2016; Peng et al., 2016; Zheng et al., 2016); in contrast, our work focuses on the effects of compression on training. 6 CONCLUSION To continue making advances in machine learning, researchers will need access to larger and larger datasets, which will eventually spill into (potentially distributed) storage systems. Storage and networking bandwidth, which are precious resources, can be better utilized with efficient compression formats. We introduce a novel record format, Progressive Compressed Records (PCRs), that trades off data fidelity with storage and network demands, allowing the same model to be trained with 2× less storage bandwidth while retaining model accuracy. PCRs use progressive compression to split training examples into multiple examples of increasingly higher fidelity without the overheads of naive approaches. PCRs avoid duplicating space, are easy to implement, and can be applied to a broad range of tasks dynamically. While we apply our format in this work specifically to images with JPEG compression, PCRs are general enough to handle various data modalities or additional compression techniques; future work will include exploring these directions in fields outside of visual classification, such as audio generation or video segmentation. A APPENDIX A.1 LOSS, SPACE SAVINGS, AND ACCURACY PER EPOCH Below, we provide additional experiment plots that were omitted in the main text. Figure 9 and Figure 10 contain the loss over time for the ResNet-18 and ShuffleNetv2 experiments shown in Section 4. Figure 11 extends Figure 5 to show the scan sizes for all datasets. It is worth noting that Top-5 accuracies mirror the Top-1 accuracy trends for ImageNet and Cars. To measure the effect of compression without accounting for time, we show accuracy vs. epoch plots in Figure 12 and Figure 13. While compression can itself be viewed as a data augmentation (e.g., removing high frequency features that can possibly cause overfitting), we notice that it does not usually improve accuracy. Rather, most of the gains in time-to-accuracy are from faster image rates.
[Figure 9 panels: (a) ImageNet-100, (b) HAM10000, (c) Stanford Cars, (d) CelebAHQ-Smile; x-axis: time (s), y-axis: train log loss; curves: scan groups 1, 2, 5, and baseline.]
Figure 9: Training loss with ResNet-18. Time is the x-axis (seconds) and is relative to first epoch. 95% confidence intervals are shown.
[Figure 10 panels: (a) ImageNet-100, (b) HAM10000, (c) Stanford Cars, (d) CelebAHQ-Smile; same axes and curves as Figure 9.]
Figure 10: Training loss with ShuffleNetv2.
[Figure 11 (four panels): (a) ImageNet-100, (b) HAM10000, (c) Stanford Cars, (d) CelebAHQ-Smile; scan index (0–10) vs. size in bytes.] Figure 11: The size in bytes of various levels of scans read. Scan group 0 is shown, which contains only labels and is typically ∼100 bytes. Each scan adds roughly a constant amount of data (i.e., linear scaling), although certain scans add considerably more than others (i.e., sizes sometimes cluster) due to techniques like chroma subsampling. Using all 10 scans can require over an order of magnitude more bandwidth than 1–2 scans. A.2 TIME TO CONVERGENCE TABLE We provide a table of time-to-accuracy in Table 1 to help with reading Figure 4 and Figure 6. For Stanford Cars, low numbers of scans do reach accuracies faster than the baseline, but there is a noticeable drop in accuracy. This issue of achieving comparable accuracy for the Cars dataset is further explored in Appendix A.6. A.3 EXPERIMENT SETUP Below we describe details of how the experiments were run, such as hardware characteristics and software configurations. Benchmark Cluster Speeds. As noted in the main text, we utilize an NVIDIA TitanX Graphics Processing Unit (GPU) on each node for the model training. This GPU allows us to train (with FP32/FP16) ResNet-18 at 405/445 images per second and ShuffleNetv2 at 760/750 images per second. With a cached, decoded dataset of 224 × 224 resolution images, we achieve a cluster-wide 3625/4050 images per second for ResNet-18 and 6975/7075 images per second for ShuffleNetv2. ImageNet images are around 110kB on average; with 10 GPUs, the cluster can consume 445 megabytes/s (ResNet-18) and 775 megabytes/s (ShuffleNetv2) of storage system bandwidth. GPUs continue to get faster over time, and faster GPUs (or other accelerators) have higher I/O bandwidth demands. Decoding Overhead. Progressive compression has some computational overhead associated with decompression compared to baseline formats. This overhead can grow with the number of scans, and, thus, users of PCRs may be concerned about the trade-offs between decoding overheads and bandwidth savings. First, we note that PCRs can use a large number of scans (e.g., hundreds), but, in practice, useful behavior is observed using only 10 scans (of which we only use 4). Second, the decoding overhead is often a favorable trade-off compared to a storage bottleneck, if one exists. To test this, we wrote a Python microbenchmark that stores a subset of ImageNet data in memory and uses the PIL and OpenCV libraries for decoding. For PIL, we process 230 baseline images per second and 150 progressive images per second. For OpenCV, we process 225 baseline images per second and 165 progressive images per second. Thus, progressive compression with 10 scans adds only around 40–50% computational expense over baseline formats for common implementations. This speed, combined with additional optimizations such as multi-core parallelism (e.g., we would expect 4× these rates with 4 cores), suggests that while decoding can be an issue, the penalty from using progressive images can be managed more easily than a storage bottleneck (i.e., compute can usually be traded for storage bandwidth).
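The microbenchmark described above is straightforward to reproduce. Below is a minimal sketch (our illustration, not the authors' code) that keeps a set of encoded JPEGs in memory and measures decode throughput with PIL and OpenCV; the dataset path and sample count are placeholders.

```python
import glob
import time
from io import BytesIO

import cv2
import numpy as np
from PIL import Image

# Hypothetical path; point this at a directory of baseline or progressive JPEGs.
paths = glob.glob("/data/imagenet_subset/*.jpg")[:1000]
blobs = [open(p, "rb").read() for p in paths]  # keep the encoded bytes in memory


def bench_pil(blobs):
    start = time.time()
    for b in blobs:
        Image.open(BytesIO(b)).convert("RGB").load()  # force a full decode
    return len(blobs) / (time.time() - start)


def bench_opencv(blobs):
    start = time.time()
    for b in blobs:
        cv2.imdecode(np.frombuffer(b, dtype=np.uint8), cv2.IMREAD_COLOR)
    return len(blobs) / (time.time() - start)


print(f"PIL:    {bench_pil(blobs):.1f} images/s")
print(f"OpenCV: {bench_opencv(blobs):.1f} images/s")
```

Running it once over a baseline-encoded copy and once over a progressive copy of the same images gives the relative decode overhead reported above.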
Further, some of the decoding can actually be moved to an accelerator, like the GPU used for training, something which is already available via nvJPEG (https://developer.nvidia.com/nvjpeg). Reducing this computational expense by optimizing the implementation or reducing the number of scans (since our experiments only use 4 distinct scans) is left as future work. Image Loading Rates. We provide image loading rates observed during training in Table 2. Using more scans slows down training significantly, as can be seen in the image rates. It is worth noting that these rates vary considerably during runtime (due to stalls), and ShuffleNetv2 is capable of a higher maximum training rate than ResNet-18. Further, as the number of scans is reduced, image rates approach the maximum achievable by the cluster for each model. A.4 DATASET DETAILS Below we describe the characteristics of the datasets used. ImageNet-100 Creation. The ImageNet-100 dataset was constructed by subsampling 100 classes out of the 1000 classes found in the ImageNet ILSVRC dataset (Deng et al., 2009; Russakovsky et al., 2015). These classes were chosen arbitrarily to limit computation time—they are the first 100 classes of ImageNet in sorted directory listing form, i.e., n01440764–n01855672. CelebAHQ-Smile Creation. The CelebAHQ dataset (Karras et al., 2018) was created as a high quality version of the CelebA dataset (Liu et al., 2015). CelebA contains attributes for each face, such as whether the face is smiling or not. CelebAHQ-Smile utilizes these attributes to construct a dataset of 30k faces, where each face is assigned a binary variable for smiling or not. While the CelebA dataset was subsampled to construct CelebAHQ, we do not subsample CelebAHQ further (i.e., we use all 30k images it contains). Record and Image Quality Details. We provide the dataset size details for the encoded datasets in Table 3. As the original (e.g., lossless) images are hard to find, we estimate the JPEG quality setting of the training set with ImageMagick using identify -format '%Q'. The JPEG quality setting determines the level of frequency quantization outlined in Figure 2. Intuitively, one would expect that higher quality JPEG images could allow more aggressive PCR compression rates for a fixed resolution, since each image has more redundant information on average. ImageNet and HAM10000 both have high quality images. CelebAHQ has lower quality images, but they are downscaled to 256×256 for training purposes, which increases the information density in the image (e.g., blurry images can be made to appear less blurry by downsampling), a fact exploited in prior work (Yan et al., 2017). Cars has neither high JPEG quality nor large resolution. Under-compressing images (perhaps at high resolution) during the initial JPEG compression may allow for a larger range of viable scan groups. A.5 RECORD FORMAT CONVERSION TIMES We provide bandwidth-optimized record baselines in Figure 14, where we re-encode the images using a statically-chosen level of compression. These baselines re-encode the images with 50% quality and 90% JPEG quality, respectively, to reduce dataset size at a fixed level of fidelity. It is worth noting that re-encoding images compounds with the original JPEG compression, so the re-encoded image quality may be lower than 50% or 90% quality compared to the images in their original lossless form. This is in contrast to PCRs, which losslessly convert the images into a progressive format, which allows dynamic access to the level of fidelity.
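For reference, the static baseline above amounts to a one-off re-encode of every image at a fixed quality. A minimal Pillow sketch of such a re-encoding pass is shown below; the paths and the quality value are placeholders, and this is our illustration rather than the exact conversion script used in the paper.

```python
import glob
import os

from PIL import Image

SRC = "/data/train"       # hypothetical original JPEG dataset
DST = "/data/train_q50"   # hypothetical re-encoded copy
QUALITY = 50              # fixed fidelity chosen once, at encoding time

for path in glob.glob(os.path.join(SRC, "**", "*.jpg"), recursive=True):
    out_path = os.path.join(DST, os.path.relpath(path, SRC))
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    # Decode and re-encode at the target quality; note that this compounds with
    # the original (lossy) JPEG compression, unlike the lossless PCR conversion.
    Image.open(path).convert("RGB").save(out_path, format="JPEG", quality=QUALITY)
```

Each desired fidelity level requires a separate pass of this kind, which is exactly the duplication that PCRs avoid.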
We observe that both the baseline method of dataset bandwidth reduction and the PCR method can take considerable encoding time, since the encoding time scales proportionally to the dataset size. We also observe that the PCR method is competitive with the baseline in terms of encoding time (1.15× to 2.98×). PCRs avoid having to re-encode a dataset at multiple fidelity levels, and, therefore, they can save both storage space and encoding time. Converting the full ImageNet into record format takes roughly 16× longer than the 6 minutes needed for the 10× smaller subsampled dataset—the PCR conversion is 96 minutes (53 minutes are spent in JPEG conversion). One reason for this additional slowdown is that any system caches (e.g., in the distributed filesystem or the file cache on the converter node) are less likely to see a cache hit due to the working set size being larger. Although the exact conversion times are dependent on implementation, hardware, and the dataset, conversion times can be in the range of one hour of compute time per 100 GB. A.6 COARSE GRAINED VS. FINE GRAINED CARS EXPERIMENTS We provide experiments validating that compression needs vary within the same dataset for different tasks in Figure 15 and Figure 16, which show accuracy and loss, respectively. This experiment simply coarsens the granularity of the classification task, and demonstrates that lower scan groups can be used for tasks which are easier. The full range of classes is used for Baseline (i.e., car make, model, and year create a unique class), only car make is used for Make-Only, and a binary classification task of Corvette detection is used for Is-Corvette. We can see that compared to the original task, the coarse tasks reduce the gap between scan groups, and the binary task closes the gap even more. This suggests that as the tasks get easier, the tolerance for lower scan groups grows. Simply re-assigning the class labels to a coarser class reduces the complexity of the task and closes the accuracy gap across scan groups. A fixed PCR record encoding (i.e., without re-encoding) can support multiple tasks at the optimal quality, whereas static approaches may need one encoding per task. Some training methodologies, such as Progressive GAN training (Karras et al., 2018), utilize different dataset qualities over the course of training (e.g., training with a coarse-to-fine quality schedule), and, thus, a single training session may consume dozens of distinct dataset qualities. A.7 IMAGENET-1000 RESULTS We provide the full ImageNet (i.e., 1000 classes) results with ResNet-18 and ShuffleNetv2 in Figure 17. Only group 5 and the baseline are shown, since lower group numbers have difficulty achieving baseline accuracy parity. The results show that PCRs can speed up training by a factor of 2 while retaining accuracy even with large scale (i.e., over 1 million samples) training.
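As a concrete illustration of the coarse-grained Cars tasks in A.6, the relabeling amounts to a mapping over class names. The sketch below is illustrative only and assumes each fine-grained Cars label is a string such as "Chevrolet Corvette ZR1 2012" whose first token is the make; the helper names and example labels are not from the paper.

```python
def to_make_only(fine_label: str) -> str:
    """Map a 'make model year' class name to the make alone (Make-Only task)."""
    return fine_label.split()[0]


def to_is_corvette(fine_label: str) -> int:
    """Binary Corvette-detection label (Is-Corvette task)."""
    return int("corvette" in fine_label.lower())


fine_labels = ["Chevrolet Corvette ZR1 2012", "Audi TT RS Coupe 2012"]  # illustrative
print([to_make_only(l) for l in fine_labels])    # ['Chevrolet', 'Audi']
print([to_is_corvette(l) for l in fine_labels])  # [1, 0]
```

Because only the labels change, the same PCR-encoded images serve all three tasks.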
[Figure 16 (six panels): ResNet-18 (top: (a) Baseline, (b) Make-Only, (c) Is-Corvette) and ShuffleNetv2 (bottom: (d) Baseline, (e) Make-Only, (f) Is-Corvette); train log loss vs. time for scan groups 1, 2, 5, and the baseline.] Figure 16: Training loss with ResNet-18 (top) and ShuffleNetv2 (bottom) on a coarser version of the Stanford Cars dataset. The full range of classes is used for Baseline (i.e., car make, model, and year create a unique class), only car make is used for Make-Only, and a binary classification task of Corvette detection is used for Is-Corvette. The gap between scan groups closes as the task is made simpler. Time is the x-axis (seconds) and is relative to first epoch. 95% confidence intervals are shown. A.8 IMAGE EXAMPLES BY SCAN We provide image examples from each dataset that illustrate each scan group in Figure 18. Reading more scans, and, thus, data, from a progressive image results in higher fidelity images, but there are diminishing returns. Images can use a remarkably low number of scan groups without impacting visual quality, which manifests in bandwidth savings if used accordingly.
[Figure 17 (four panels): (a) ResNet-18 Train Loss, (b) ResNet-18 Test Accuracy, (c) ShuffleNetv2 Train Loss, (d) ShuffleNetv2 Test Accuracy; scan group 5 vs. baseline.] Figure 17: Training loss and test accuracy with ResNet-18 (top) and ShuffleNetv2 (bottom) on the 1000 class ImageNet Dataset. Time is the x-axis (seconds) and is relative to first epoch. 95% confidence intervals are shown.
1. What is the focus of the paper regarding progressive compression for deep neural networks?
2. What are the concerns regarding the setting and assumptions made in the paper?
3. How does the proposed approach compare with other techniques optimizing disk I/O in deep learning training?
4. What are the strengths and potential applications of the proposed method?
5. Are there any clarity issues or typos in the paper that need attention?
Review
Review The paper demonstrates an interesting application of progressive compression to reduce the disk I/O overhead of training deep neural networks. The format encodes the trade-off between data fidelity and I/O bandwidth demand naturally, which could be useful when I/O is the bottleneck. My major concern is that the paper should be clearer about the setting.
* Does your work target the case where data cannot fit in RAM and must be fetched from local disk or over the network? However, the datasets used in the evaluation look small and could fit in RAM.
* How are mini-batches created? You mentioned in the related work that previous work (Kurth et al., 2018) lets each worker sample from a local subset instead of performing a true sampling of the whole dataset. Does your work perform true sampling? How much benefit does it give?
* Is disk I/O really a bottleneck in training? There is ample evidence [1][2][3] of almost linear scalability in training ResNet on *full* ImageNet across hundreds or even thousands of GPUs. These works focus heavily on network communication rather than disk I/O. Does your setting differ from theirs? How does your approach compare with their techniques for optimizing disk I/O?
That being said, I think this approach should be appealing when the I/O bandwidth is limited and dynamic. Examples include training on edge devices, or federated training where data needs to be fetched via an ad-hoc network.
Other detailed comments:
* Figure 1 is not very informative and quite puzzling. There is no definition of quality at that point.
* Sec 2 paragraph 3. What is the issue of data augmentation with the standard JPEG compression? Does your compression ease data augmentation?
* Sec 3.1 paragraph 1. "This is turn enables ..." -> "This in turn enables ..."
* How to decide the number of scans? Does it have an impact on the I/O efficiency?
* Evaluation
- I'm not familiar with Ceph. Why choose this particular environment? Does it bring in extra overhead (e.g., communicating with the metadata server)? What does the network topology look like? Is the data loading stall (Figure 7) due to network congestion?
- It is worth evaluating more tasks such as detection and segmentation to measure the impact of compression.
[1] Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, Goyal et al.
[2] Massively Distributed SGD: ImageNet/ResNet-50 Training in a Flash, Mikami et al.
[3] Image Classification at Supercomputer Scale, Ying et al.
ICLR
Title Progressive Compressed Records: Taking a Byte Out of Deep Learning Data Abstract Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by using progressive compression to convert a single dataset into multiple datasets of increasing fidelity—all without adding to the total dataset size. Empirically, we implement PCRs and evaluate them on a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ. Our results show that different tasks can tolerate different levels of compression. PCRs use an on-disk layout that enables applications to efficiently and dynamically access appropriate levels of compression at runtime. In turn, we demonstrate that PCRs can seamlessly enable a 2× speedup in training time on average over baseline formats. 1 INTRODUCTION Distributed deep learning exploits parallelism to reduce training time, and consists of three key components: the data pipeline (storage), the forward/backward computation (compute), and the variable synchronization (network). A plethora of work has investigated scaling deep learning from a compute- or network-bound perspective (e.g., Dean et al., 2012; Cui et al., 2016; Abadi et al., 2015; Cui et al., 2014; Jouppi et al., 2017; Lim et al., 2019; Zhu et al., 2018; Alistarh et al., 2017; Lin et al., 2018; Wen et al., 2017; Wangni et al., 2018; Zhang et al., 2017). However, little attention has been paid toward scaling the storage layer, where training starts and training data is sourced. Unfortunately, hardware trends point to an increasing divide between compute and networking or storage bandwidth (Li et al., 2016; Lim et al., 2019; Kurth et al., 2018). For example, the transportation of data for machine learning is a key factor in the design of modern data centers (Hazelwood et al., 2018), which are expected to be serviced by slow, yet high capacity, storage media for the foreseeable future (David Reinsel, 2018; Cheng et al., 2015; Rosenthal et al., 2012). This, combined with the memory wall—a lack of bandwidth between compute and memory—suggests that, while computation may be sufficient moving forward, the mechanisms for moving data to the compute may not (Wulf & McKee, 1995; Kwon & Rhu, 2018; Hsieh et al., 2017; Zinkevich et al., 2010).
The storage pipeline is therefore a natural area to seek improvements in overall training times, which manifest from the storage medium, through the network, and into the compute nodes. In this work, we propose a novel on-disk format called Progressive Compressed Records (PCRs) as a way to reduce the bandwidth cost associated with training over massive datasets. Our approach leverages a compression technique that decomposes each data item into deltas, each of which increases data fidelity. PCRs utilize deltas to dynamically compress entire datasets at a fidelity suitable for each application’s needs, avoiding duplicating the dataset (potentially many times) at various fidelity levels. Applications control the trade-off between dataset size (and, thus, bandwidth) and fidelity, and a careful layout of deltas ensures that data access is efficient at a storage medium level. As a result, we find that for a variety of popular deep learning models and datasets, bandwidth (and therefore training time) can be easily reduced by 2× on average relative to JPEG compression without affecting model accuracy. Overall, we make the following contributions: 1. In experiments with multiple architectures and several large-scale image datasets, we show that neural network training is robust to data compression in terms of test accuracy and training loss; however, the amount of compression that can be tolerated varies across learning tasks. 2. We introduce Progressive Compressed Records (PCRs), a novel on-disk format for training data. PCRs combine progressive compression and careful data placement to enable applications to dynamically choose the fidelity of the dataset they consume, reducing data bandwidth. 3. We demonstrate that by using PCRs, training speed can be improved by 2× on average over standard formats using JPEG compression. This is achieved by selecting a lower data fidelity, which, in turn, reduces the amount of data read without significantly impairing model performance. 2 BACKGROUND Two complementary concepts make up the process of storing training data: the layout of the data on the storage medium and the representation of the data. Data layout is important because it can help fully utilize the bandwidth potential of the underlying storage system. Data representation is important because it can reduce the amount of data transferred per data unit (i.e., a bandwidth requirement reduction). An example of data representation within the scope of this work is compression, which increases the computation per bit—a key property to consider as computation increases faster than bandwidth to storage. Compression may lower image quality by introducing artifacts or blur. Record Layouts. Learning from data requires sampling points from a training set, which can cause small, random accesses that are detrimental to the performance of the storage device. Record layouts, such as TensorFlow’s TFRecords (TFRecords) or MXNet’s ImageRecord (ImageRecord), attempt to alleviate this problem by batching data points together to increase access locality. Batches of training data (i.e., dataset subsets) are then accessed together, amortizing delays in access time across multiple data points. These batches of data are called records. The key to any record layout is the serialization, which is the conversion of data structures into byte streams. Record designs have different performance properties (e.g., space or access time) when written to disk, as shown in Figure 1. Image Compression. 
Compressed forms are commonly used to represent training data. JPEG (Wallace, 1992) is one of the most popular formats for image compression and is used ubiquitously in machine learning (e.g., Deng et al., 2009; Russakovsky et al., 2015; Lin et al., 2014; Everingham et al., 2010). Most compression formats (including JPEG) only allow for the compression level, i.e., the trade-off between data size and fidelity, to be set at encoding time, which often results in choosing this level independent of the application. This can result in over-compression, which may negatively impact application convergence quality, or under-compression, which results in excess data size, and thus, slower storage system performance. Worse, deep learning pipelines often involve an application-defined post-processing step (e.g., data augmentation), which may further distort an image and obscure the relationship between image fidelity and model accuracy (Bishop, 1995; Karras et al., 2018; Dziugaite et al., 2016; Arnab et al., 2018). While setting encoding-time parameters is unavoidable, the ability to decompress data as it becomes available (i.e., dynamic compression) provides a means to avoid some of the bandwidth expenses of under-compression by simply terminating decompression once sufficient fidelity is reached. In Figure 2, we provide a high-level illustration of the JPEG algorithm, which can be customized to support dynamic compression. First, an image is split into blocks of size 8 × 8. Each block is converted into the frequency domain, such that frequency 0 is the average color of the block, and higher frequencies encode rapid changes in the block. The low frequencies, such as the average value of the block, store the bulk of the perceptually-relevant content in the image (e.g., knowing the block is mostly blue is more important than knowing a white wave is rippling through it). Quantization, which discards information from the block and results in compression, thus prioritizes discarding higher frequencies. The resulting quantized table is then serialized into a flat form. Since data is rendered on a screen from left to right, top to bottom, it makes sense to encode the data in this manner, which results in a sequential format ("sequential" here refers to the in-memory ordering and should not be confused with sequential on-disk access). Decoding the resulting data is simply a matter of inverting (albeit losslessly) the process that we just described. Progressive Image Compression. Progressive formats allow data to be read at varying degrees of compression without duplication. With the sequential case, data is ordered by blocks, and thus, partially reading the data results in "holes" in the image for unread blocks (Wallace, 1992). Dynamic compression ensures that all blocks get some information (deltas) before revising them (with more deltas). As progressive formats are simply a different traversal of the quantization matrix, with all else being equal, they contain the same information as sequential JPEG (JPEGTran LibJPEG). Progressive JPEG, combined with an additional rearrangement of data, forms the basis of the idea behind PCRs. In Figure 2, non-progressive formats serialize the image matrix in one pass, while progressive formats serialize the matrix in disjoint groups of deltas which are called scans. Scans are ordered by importance (e.g., the first few scans improve fidelity more than subsequent scans). Thus, any references to images generated from scan n will implicitly assume that the image decoder had access to all prior scans (i.e., {scan 1, scan 2, . . . , scan (n − 1)}).
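To make the notion of scans concrete, the following sketch (our illustration, not part of the paper's tooling) re-encodes an image as a progressive JPEG with Pillow and counts the start-of-scan (SOS, 0xFFDA) markers in the byte stream; each SOS marker begins one scan, i.e., one delta of fidelity. The input filename is a placeholder.

```python
from io import BytesIO

from PIL import Image


def scan_offsets(jpeg_bytes: bytes):
    """Return byte offsets of the SOS (0xFFDA) markers, one per scan."""
    offsets, i = [], 0
    while True:
        i = jpeg_bytes.find(b"\xff\xda", i)
        if i == -1:
            return offsets
        offsets.append(i)
        i += 2


img = Image.open("example.jpg").convert("RGB")  # hypothetical input image

sequential, progressive = BytesIO(), BytesIO()
img.save(sequential, format="JPEG", quality=90)
img.save(progressive, format="JPEG", quality=90, progressive=True)

print("sequential scans: ", len(scan_offsets(sequential.getvalue())))   # typically 1
print("progressive scans:", len(scan_offsets(progressive.getvalue())))  # typically ~10
```

Because 0xFF bytes inside entropy-coded data are byte-stuffed, the 0xFFDA pattern only appears at genuine scan boundaries, so the offsets returned above delimit the deltas described in the text.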
The bottom of Figure 2 shows how image fidelity improves from a single scan to utilizing all scans. 3 PROGRESSIVE COMPRESSED RECORDS In this section, we introduce a novel record format for machine learning training called Progressive Compressed Records (PCRs). PCRs are a combination of both layout and data representation. Efficient layouts guarantee that hardware is fully utilized (in terms of bandwidth), while efficient data representations can reduce the total amount of work that is required of the system. To this end, we introduce the concept of scan groups in Section 3.1, which leverage both layout and progressive compression to obtain dynamic compression, allowing high-performance reads while reducing the amount of data read. Using progressive compression, scan groups break images into deltas, which are then rearranged in order to facilitate reduced, yet sequential, data access. In Section 3.2, we discuss how PCRs are implemented, covering both creating PCRs (encoding) and reading them (decoding). The benefits of the PCR implementation boiling down to a bit shuffle are that: 1) PCRs are easy to implement, 2) they are fundamentally lossless, and 3) processing them is fast. As we demonstrate in Section 4, while PCRs can be implemented easily, they manifest in large speedups for a variety of scenarios. Further, PCRs can be generalized beyond images and JPEG. 3.1 SCAN GROUPS Scan groups are a collection of scans (deltas) of the same fidelity. Scan groups combine layout with progressive compression to allow reading subsets of the compressed data with high hardware efficiency. PCRs make the assumption that the entire training data will be read at the same fidelity. Using this assumption, scan groups rearrange the data such that all deltas of the same fidelity are grouped together. This, in turn, enables groups of deltas to be read together sequentially, which creates dynamicity in the decoding process. Since scans are sorted by importance, and scan groups are a set of scans, the scan groups are also sorted by importance. To paint a clear representation of how scan groups work, we point the reader to Figure 3. PCRs begin with some metadata which is assumed to be needed by all machine learning tasks, such as labels or bounding boxes. In practice, metadata is small in size, and, thus, the space overheads are negligible. The metadata is followed by scan groups, which consist of scans. The scan 1 representation of the shark in Figure 2 will be available in its record once data is read up to offset 1. Likewise, the scan 3 representation will be available once the record is read up to offset 3, and the representation will be more crisp as 3 scans were used per image, rather than 1. Reading up to the end of the record yields the most complete representation of the image. As scan groups consist of groups of the same fidelity, every image contained in a record is available at the same fidelity at the same group offset. Users of PCRs can read data at a certain scan fidelity by simply reading the on-disk byte stream from the start of the PCR (i.e., offset 0) to the byte offset of the desired scan group. Partially reading the records results in bandwidth savings without re-encoding the data. 3.2 IMPLEMENTATION There are two fundamental PCR implementation details: the encoding process and the decoding process.
The encoding process transforms a set of JPEG files into a directory, which contains 1) a database for PCR metadata and 2) at least one .pcr file. The decoding process, which takes the directory as input and yields a set of JPEG images, efficiently inverts a subset of the encoding. The dataset is split into many PCRs, and, thus, the training process is reading tens to hundreds of .pcr files per epoch. The data loader is where the PCR decoding library interfaces with the inputs provided to deep learning libraries (e.g., TensorFlow (Abadi et al., 2015), MXNet (Chen et al., 2015), PyTorch (Paszke et al., 2017)). Below, we describe how each of these steps is done. Encoding. Given a set of images, the PCR encoder must break the images into scans, group the scans into scan groups, and sort the scan groups by fidelity. Once the groups are sorted, the PCR encoder can serialize the groups while taking note of their offsets (so that subsets may later be decoded). The metadata (e.g., labels) is prepended to the serialized representation, and the serialized representation is written to disk. We focus on grouping JPEG due to its generality, but PCRs can use any dataset-level progressive format. Images can be decomposed in both space and fidelity; other data modalities (e.g., video) may also have time. Our implementation uses JPEGTRAN (JPEGTran Man Page) to losslessly transform the set of JPEG images into a set of progressive JPEG images. With the default settings, each JPEG is broken up into 10 scans. The encoder scans the binary representation of the progressive JPEG files, searching for the markers that designate the end of a scan group. The encoder thus has access to all 10 offsets within the JPEG files that can be used to determine the boundaries between scan regions. Forming scan groups requires grouping the scan regions with the same fidelity together, which can be done in one pass over the set of images corresponding to that PCR. This grouping must be reversible, as the decoding process will un-group the scans to reconstruct the original images. This grouping can be done with existing serialization libraries. We use Protobuf (Protobuf) to serialize the groups as well as the labels. However, it is key that every group (and the metadata) be serialized as a separate message, as Protobuf can rearrange the contents within a message, and thus can rearrange the ordering of the groups themselves. We finally concatenate the contents of the messages and write them out as one file. As shown in Appendix A.5, any record format conversion can be expensive; PCRs benefit from requiring only a single conversion for multiple tasks. Decoding. To decode a PCR file, one has to first lookup the file’s scan group offsets in the database. The offsets provide sufficient information to do a partial read of the file (e.g., instead of reading the entire file, we read only enough bytes to read up to the desired scan group). Decoding the JPEGs requires inverting the PCR scan-group grouping process for the available scan-groups prior to JPEG decode. Since we are missing scan-groups, we terminate the byte stream with an End-of-Image (EOI) JPEG token—this technique allows most JPEG decoders to render the byte stream with only the available subset of scans. The bulk of the inverse conversion is done in 150 lines of C++ code. Loader. We implemented PCR loaders using PyTorch’s dataloader as well as DALI (NVIDIA, 2018)’s ExternalSource operator to return batches of images at a configurable fidelity (with the corresponding labels). 
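The key decoding trick—rendering only the scans that were actually read—can be sketched independently of the exact .pcr layout. The snippet below is our illustration, not the authors' 150-line C++ implementation or their DALI/PyTorch loaders: it truncates a progressive JPEG after its first k scans and appends the End-of-Image marker (0xFFD9) so that a stock decoder accepts the shortened stream. In a real PCR loader the byte ranges would come from the record's offset database rather than from re-scanning for markers.

```python
from io import BytesIO

from PIL import Image, ImageFile

# Be lenient in case a decoder flags the shortened stream as incomplete.
ImageFile.LOAD_TRUNCATED_IMAGES = True

EOI = b"\xff\xd9"  # End-of-Image marker


def truncate_to_scans(jpeg_bytes: bytes, k: int) -> bytes:
    """Keep the first k scans of a progressive JPEG and close the stream with EOI."""
    offsets, i = [], 0
    while (i := jpeg_bytes.find(b"\xff\xda", i)) != -1:  # SOS markers, one per scan
        offsets.append(i)
        i += 2
    if k >= len(offsets):
        return jpeg_bytes  # fewer scans than requested: return everything
    return jpeg_bytes[:offsets[k]] + EOI


# Hypothetical progressive JPEG produced at encoding time.
data = open("example_progressive.jpg", "rb").read()
low_fidelity = Image.open(BytesIO(truncate_to_scans(data, k=2)))
low_fidelity.load()  # decodes only the first two scans
low_fidelity.save("example_scan2.jpg")
```

A loader built around this idea simply reads each record up to the chosen scan-group offset, slices out the per-image byte ranges, applies the truncation above, and yields (image, label) pairs.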
We find that a pipeline abstraction simplifies loader design, since record-based datasets can be easily iterated sequentially. In contrast, the PyTorch Dataloader abstraction, which assumes that we can index randomly into an in-memory data structure (e.g., i = RandInt(0, n); (x, y) = data[i];), is harder to use for constantly fetching record formats off disk. Our implementation, while being only several hundred lines of code, obtains image rates that are competitive (e.g., faster/slower depending on number of scans) with the included DALI TFRecord loader, showing that PCRs can be implemented efficiently (i.e., fast enough to rarely bottleneck data loading) with a low amount of engineering effort. 4 EXPERIMENTS This section presents our evaluation of PCRs using a suite of large-scale image datasets. As large images are more taxing to a system's network and storage, our evaluation focuses on datasets with high-resolution images. We describe our experimental setup in Section 4.1. We present our evaluation results in Section 4.2, showing that halving data bandwidth per image results in comparable accuracy but with half the training time. In Section 4.3, we analyze the intuitive relationship between objective measures of image fidelity and time-to-accuracy. Finally, in Section 4.4, we present results that trace the training time speedups to the data loading times themselves. 4.1 EVALUATION SETUP Our evaluation uses the ImageNet ILSVRC (Deng et al., 2009; Russakovsky et al., 2015), HAM10000 (Tschandl et al., 2018), Stanford Cars (Krause et al., 2013), and CelebA-HQ (Karras et al., 2018) datasets, which are described below. See Appendix A.4 for additional details. Datasets. • ImageNet-100 ImageNet provides a wide diversity of classes, of which we focus on the first 100 to make training times more tractable. Since classes are roughly ordered by ImageNet categories, this results in a fine-grained, i.e., hard to classify, multiclass task. We convert the dataset into PCRs in batches of 1024, which results in 126 PCRs. We use the full ImageNet dataset in Appendix A.7. • HAM10000 We split the HAM10000 dataset randomly 80%/20% between train and test. We convert the dataset into PCRs in batches of 64, which results in 125 PCRs of similar size as the ones used for ImageNet-100. • Stanford Cars The Stanford Cars dataset is another fine-grained classification dataset, since all images are cars, and there are 196 classes spread over 16k images. We believe this dataset highlights some of the worst-case training scenarios, as it is considerably easier to predict highly compressed variants of unrelated images by exploiting low frequency image statistics (e.g., planes vs. frogs). We explore a coarse-grained version of Cars in Appendix A.6. Cars has 63 PCRs. • CelebAHQ-Smile CelebA-HQ is a high-resolution derivative of the CelebA dataset (Liu et al., 2015), which consists of 30k celebrity faces at 1024 × 1024. We use the annotations provided by CelebA to construct a smiling or not smiling dataset. We split the 30k dataset into 80%/20% train/test, and we convert the training set into 93 PCRs. All datasets utilize resizing, crop, and horizontal-flip augmentations, as is standard for ImageNet training. We provide examples of scan groups for these datasets in Appendix A.8. Training Regime. We use pretrained ImageNet weights for HAM10000 and Cars due to the limited amount of training data.
We use standard ImageNet training, starting the learning rate at 0.1 (with gradual warmup (Goyal et al., 2017)) and dropping it on epoch 30 and 60 by 10×. After augmentations, all inputs are of size 224× 224. The pretrained experiments (HAM10000 and Cars) start at a learning rate of 0.01 to avoid changing the initialization too aggressively. We use fp16 training (Micikevicius et al., 2018) as it results in an additional 10% images per second (see Appendix A.3). We use a ResNet18 (He et al., 2016) and ShuffleNetv2 (Ma et al., 2018) architecture for our experiments with a batch size of 128 per each worker. We run each experiment at least 3 times to obtain confidence intervals given different random seeds and sources of non-determinism such as multi-threading and I/O. System Setup. We run distributed experiments on a 16-node Ceph (Weil et al., 2006) cluster connected with a Cisco Nexus 3264-Q 64-port QSFP+ 40GbE switch. Each node has a 16– core Intel E5–2698Bv3 Xeon 2GHz CPU, 64GiB RAM, NVIDIA TitanX, 4TB 7200RPM Seagate ST4000NM0023 HDD, and a Mellanox MCX314A-BCCT 40GbE NIC. All nodes run Linux kernel 4.15 on Ubuntu 18.04, CUDA10, and the Luminous release (v12.2.12) of Ceph. We use six of the nodes as Ceph nodes; five nodes are dedicated as storage nodes in the form of Object Storage Devices (OSDs), and one node is used as a Ceph metadata server (MDS). The remaining 10 nodes are used as machine learning workers for the training process. This means there is a 2:1 ratio between compute and storage nodes. We use PyTorch (Paszke et al., 2017) (v1.12) with NVIDIA Apex (Apex) (v0.1) and NVIDIA DALI (NVIDIA, 2018) (v0.14.0). We use at least four worker threads to prefetch data in the loader. While we focus on this particular distributed setting, we observe similar time-to-accuracy gains on a single machine with eight GPUs sharing the same disk, and we believe the results will generalize to different setups. 4.2 TIME TO ACCURACY The time-to-accuracy results for ResNet18 training are presented in Figure 4, while those of ShuffleNetv2 are presented in Figure 6. See Appendix A.2 for a tabular view and Appendix A.1 for the corresponding training loss results. All scan groups within a dataset were run for the same amount of epochs, so lower scan groups finish earlier. 90 epochs are shown for ImageNet, 150 epochs are shown for HAM10000, 250 epochs are shown for Stanford Cars, and 90 epochs are shown for CelebAHQ-Smile. We sample the test accuracy every 15 epochs for non-ImageNet datasets to reduce interference with training measurements. To avoid measuring implementation differences with other loaders, our evaluation focuses on the differences obtained by reading various amounts of scan groups. Reading all the data (up to scan group 10) is the baseline. First, we note that for all datasets, except for Cars, PCRs provide a 2× boost to time-to-accuracy compared to the baseline. The reason for this speedup is that lower scan groups are smaller. As shown in Figure 5, scan group 5 is roughly half the size of the baseline, and scan group 1 is a fifth of scan group 5 (i.e., a potential 10× bandwidth savings). This trend holds across datasets (see Appendix A.1). As we will discuss in Section 4.4, the space savings manifest in reduced dataloader latencies. Second, we note that there is an inherent trade-off between convergence quality and the speedup attained by using less storage resources. 
In general, although lower fidelity scan groups allow the system to operate more efficiently, they do so at the expense of model convergence. Scan group 1, the lowest fidelity scan, performs poorly, especially on Cars, where fine-grained details are important. Scan groups limit the maximum achievable accuracy on a task; if learning plateaus prematurely, applications should raise the scan group in a manner similar to dropping the learning rate. Third, the relative rankings of scan groups are relatively stable across models and datasets, which reduces tuning efforts in choosing the appropriate scan group. We further relate these rankings to the fidelity of the scan groups in Section 4.3. Our conclusion is that, for most datasets, scan group 5 costs half as much in terms of bandwidth, but reaches the same level of test accuracy as the baseline—thus, it is a good default. This is most apparent for ImageNet and HAM10000, which are challenging enough for small variations in image fidelity to make a commensurate difference in test accuracy. In contrast, Cars is too fine-grained to allow images to be degraded, and CelebAHQ-Smile is too coarse-grained for image degradation to matter. 4.3 THE RELATIONSHIP BETWEEN IMAGE FIDELITY AND TEST ACCURACY We use MSSIM (Wang et al., 2003), a standard measure of image similarity, to compare how various scans approximate the reference image, and we show the results in Figure 7. We find that there is a strong connection between MSSIM and the resulting final test accuracy, especially when comparing scan groups within a task. Our preliminary tests demonstrate that scan groups that have very similar MSSIM perform very similarly, which is why only groups 1, 2, 5, and the baseline are shown. Due to the way progressive JPEG is coded by default, groups tend to cluster (e.g., 2, 3, and 4 are usually similar, while 5 introduces a difference). We note that MSSIM being poor (e.g., scan group 1 for cars) or MSSIM being close to baseline (e.g., scan group 5 for HAM10000) are good predictors of relative test accuracy within tasks. MSSIM can therefore be used as a diagnostic for choosing scans. 4.4 THE RELATIONSHIP BETWEEN SCANS AND DATA STALLS The datasets we evaluated show that data loading can slow down the training process. To highlight these slowdowns, and the improvements PCRs achieve by not using all scan groups, we present the loading time of data for the ResNet18 ImageNet-100 run in Figure 8. We obtain similar results for the other datasets. The baseline of using all scan group results in high periodic loading stalls, where the prefetching queue was drained. Upon blocking, training cannot proceed until the worker threads obtain a full batch of data. Periods of (mostly) no stalls are caused by both threads pre-fetching the data and single records servicing multiple minibatches. Using fewer scan groups reduces the amount of data read, which results in lower magnitude stalls. We observe these stalls with both DALI and PyTorch loaders. 5 RELATED WORK Training Over Large Datasets. Training with massive amounts of parallelism (thus stressing system bandwidth) while achieving near-linear speedup has been the focus of previous work, and it highlights a practical need for efficient data pipelines at scale. A common objective is training mod- els over ImageNet in a record amount of time (Goyal et al., 2017; You et al., 2018; Jia et al., 2018; Ying et al., 2018; Yamazaki et al., 2019). 
This line of work, while demonstrating immense bandwidth needs, typically keeps data in memory, avoiding storage problems altogether. Recently, the high performance computing community has become interested in training models at massive scale (27k GPUs) (Kurth et al., 2018). Since each GPU matches a disk in bandwidth, the dataset was partitioned among the local memory/storage of the nodes, avoiding the distributed filesystem. Our work attempts to reduce the storage bottleneck altogether, such that anything from a couple disks to a distributed file system could service many GPUs. A separate line of work shows that I/O is a significant bottleneck for certain tasks and proposes optimizing I/O via a set of deep-learning specific optimization to LMDB (Pumma et al., 2019). In contrast, our focus is more on data representation, which is independent of the internals of the storage system. Production systems such as TFX (Baylor et al., 2017) have used custom Protobuf parsers to get 2–5× speedups for simple (e.g., linear) models; these techniques are complementary to ours and reduce loader computational overheads. Dataset Reduction Techniques. The availability of larger datasets has spawned interest in learning algorithms that guaranteed both “good” model accuracy and lower computational complexity. Data reduction techniques, such as sketching, coresets, clustering, and sampling, have been used to reduce the size of a training set (Karnin & Liberty, 2019; Feldman et al., 2013; Liberty, 2013; Woodruff, 2014; Daniely et al., 2017; Kabkab et al., 2016; Bachem et al., 2017). A different approach is to use the unaltered training set, but reduce the size of the active training set to reduce bandwidth requirements (Matsushima et al., 2012). In contrast, we modify the data representation and layout to be more efficient across a wide variety of models. Compression. Finally, the reduction of data size via compression methods is ubiquitous across computer systems. To avoid costly model transmission/storage, prior work compressed neural network models (Han et al., 2016b;a; 2015; Cheng et al., 2017; Xu et al., 2018; Hwang & Sung, 2014; Anwar et al., 2015; Denton et al., 2014). Similarly, dataset distillation (Wang et al., 2018) compresses a model’s parameters into a few training examples. Our work attempts to compress data for training, and not the network itself. Prior work has looked into optimizing training systems by compressing neural network training network traffic (Lim et al., 2019; Alistarh et al., 2017; Lin et al., 2018; Wen et al., 2017; Wangni et al., 2018; Zhang et al., 2017). This trend is not specific to machine learning; prior work in databases, computer memories, and the web used compression to reduce system bandwidth requirements (Zukowski et al., 2006; Abadi et al., 2006; Pekhimenko et al., 2018; 2012; Yan et al., 2017; Agababov et al., 2015). Our work focuses on bandwidth for ML data pipelines by utilizing the compression robustness found in most models. Other work modifies models to be able to directly train on compressed representations for the purpose of avoiding decoding or reducing model complexity (Gueguen et al., 2018; Torfason et al., 2018; Fu & Guimaraes, 2016; Ulicny & Dahyot, 2017). Our work differs in motivation, as we do not focus on model computation or make modifications to the models. 
Previous work has investigated how image degradation (e.g., JPEG artifacts) affects inference (Dodge & Karam, 2016; Vasiljevic et al., 2016; Peng et al., 2016; Zheng et al., 2016); in contrast, our work focuses on the effects of compression on training. 6 CONCLUSION To continue making advances in machine learning, researchers will need access to larger and larger datasets, which will eventually spill into (potentially distributed) storage systems. Storage and networking bandwidth, which are precious resources, can be better utilized with efficient compression formats. We introduce a novel record format, Progressive Compressed Records (PCRs), that trades off data fidelity with storage and network demands, allowing the same model to be trained with 2× less storage bandwidth while retaining model accuracy. PCRs use progressive compression to split training examples into multiple examples of increasingly higher fidelity without the overheads of naive approaches. PCRs avoid duplicating space, are easy to implement, and can be applied to a broad range of tasks dynamically. While we apply our format in this work specifically to images with JPEG compression, PCRs are general enough to handle various data modalities or additional compression techniques; future work will include exploring these directions in fields outside of visual classification, such as audio generation or video segmentation. A APPENDIX A.1 LOSS, SPACE SAVINGS, AND ACCURACY PER EPOCH Below, we provide additional experiment plots that were omitted in the main text. Figure 9 and Figure 10 contain the loss over time for the ResNet-18 and ShuffleNetv2 experiments shown in Section 4. Figure 11 extends Figure 5 to show the scan sizes for all datasets. It is worth noting that Top-5 accuracies mirror the Top-1 accuracy trends for ImageNet and Cars. To measure the effect of compression without accounting for time, we show accuracy vs. epoch plots in Figure 12 and Figure 13. While compression can itself be viewed as a data augmentation (e.g., removing high frequency features that can possibly cause overfitting), we notice that it does not usually improve accuracy. Rather, most of the gains in time-to-accuracy are from faster image rates.
[Figure 9 (four panels): (a) ImageNet-100, (b) HAM10000, (c) Stanford Cars, (d) CelebAHQ-Smile; train log loss vs. time for scan groups 1, 2, 5, and the baseline.] Figure 9: Training loss with ResNet-18. Time is the x-axis (seconds) and is relative to first epoch. 95% confidence intervals are shown.
[Figure 10 (four panels): (a) ImageNet-100, (b) HAM10000, (c) Stanford Cars, (d) CelebAHQ-Smile; train log loss vs. time for scan groups 1, 2, 5, and the baseline.] Figure 10: Training loss with ShuffleNetv2. Time is the x-axis (seconds) and is relative to first epoch. 95% confidence intervals are shown.
[Figure 11 (four panels): (a) ImageNet-100, (b) HAM10000, (c) Stanford Cars, (d) CelebAHQ-Smile; scan index (0–10) vs. size in bytes.] Figure 11: The size in bytes of various levels of scans read. Scan group 0 is shown, which contains only labels and is typically ∼100 bytes. Each scan adds roughly a constant amount of data (i.e., linear scaling), although certain scans add considerably more than others (i.e., sizes sometimes cluster) due to techniques like chroma subsampling. Using all 10 scans can require over an order of magnitude more bandwidth than 1–2 scans. A.2 TIME TO CONVERGENCE TABLE We provide a table of time-to-accuracy in Table 1 to help with reading Figure 4 and Figure 6. For Stanford Cars, low numbers of scans do reach accuracies faster than the baseline, but there is a noticeable drop in accuracy. This issue of achieving comparable accuracy for the Cars dataset is further explored in Appendix A.6. A.3 EXPERIMENT SETUP Below we describe details of how the experiments were run, such as hardware characteristics and software configurations. Benchmark Cluster Speeds. As noted in the main text, we utilize an NVIDIA TitanX Graphics Processing Unit (GPU) on each node for the model training. This GPU allows us to train (with FP32/FP16) ResNet-18 at 405/445 images per second and ShuffleNetv2 at 760/750 images per second. With a cached, decoded dataset of 224 × 224 resolution images, we achieve a cluster-wide 3625/4050 images per second for ResNet-18 and 6975/7075 images per second for ShuffleNetv2. ImageNet images are around 110kB on average; with 10 GPUs, the cluster can consume 445 megabytes/s (ResNet-18) and 775 megabytes/s (ShuffleNetv2) of storage system bandwidth. GPUs continue to get faster over time, and faster GPUs (or other accelerators) have higher I/O bandwidth demands. Decoding Overhead. Progressive compression has some computational overhead associated with decompression compared to baseline formats. This overhead can grow with the number of scans, and, thus, users of PCRs may be concerned about the trade-offs between decoding overheads and bandwidth savings. First, we note that PCRs can use a large number of scans (e.g., hundreds), but, in practice, useful behavior is observed using only 10 scans (of which we only use 4). Second, the decoding overhead is often a favorable trade-off compared to a storage bottleneck, if one exists. To test this, we wrote a Python microbenchmark that stores a subset of ImageNet data in memory and uses the PIL and OpenCV libraries for decoding. For PIL, we process 230 baseline images per second and 150 progressive images per second. For OpenCV, we process 225 baseline images per second and 165 progressive images per second. Thus, progressive compression with 10 scans adds only around 40–50% computational expense over baseline formats for common implementations. This speed, combined with additional optimizations such as multi-core parallelism (e.g., we would expect 4× these rates with 4 cores), suggests that while decoding can be an issue, the penalty from using progressive images can be managed more easily than a storage bottleneck (i.e., compute can usually be traded for storage bandwidth).
Further, some of the decoding can actually be moved to an accelerator, like the GPU used for training, something which is already available via nvJPEG (https://developer.nvidia.com/nvjpeg). Reducing this computational expense by optimizing the implementation or reducing the number of scans (since our experiments only use 4 distinct scans) is left as future work. Image Loading Rates. We provide image loading rates observed during training in Table 2. Using more scans slows down training significantly, as can be seen in the image rates. It is worth noting that these rates vary considerably during runtime (due to stalls), and ShuffleNetv2 is capable of a higher maximum training rate than ResNet-18. Further, as the number of scans is reduced, image rates approach the maximum achievable by the cluster for each model. A.4 DATASET DETAILS Below we describe the characteristics of the datasets used. ImageNet-100 Creation. The ImageNet-100 dataset was constructed by subsampling 100 classes out of the 1000 classes found in the ImageNet ILSVRC dataset (Deng et al., 2009; Russakovsky et al., 2015). These classes were chosen arbitrarily to limit computation time—they are the first 100 classes of ImageNet in sorted directory listing form, i.e., n01440764–n01855672. CelebAHQ-Smile Creation. The CelebAHQ dataset (Karras et al., 2018) was created as a high quality version of the CelebA dataset (Liu et al., 2015). CelebA contains attributes for each face, such as whether the face is smiling or not. CelebAHQ-Smile utilizes these attributes to construct a dataset of 30k faces, where each face is assigned a binary variable for smiling or not. While the CelebA dataset was subsampled to construct CelebAHQ, we do not subsample CelebAHQ further (i.e., we use all 30k images it contains). Record and Image Quality Details. We provide the dataset size details for the encoded datasets in Table 3. As the original (e.g., lossless) images are hard to find, we estimate the JPEG quality setting of the training set with ImageMagick using identify -format '%Q'. The JPEG quality setting determines the level of frequency quantization outlined in Figure 2. Intuitively, one would expect that higher quality JPEG images could allow more aggressive PCR compression rates for a fixed resolution, since each image has more redundant information on average. ImageNet and HAM10000 both have high quality images. CelebAHQ has lower quality images, but they are downscaled to 256×256 for training purposes, which increases the information density in the image (e.g., blurry images can be made to appear less blurry by downsampling), a fact exploited in prior work (Yan et al., 2017). Cars has neither high JPEG quality nor large resolution. Under-compressing images (perhaps at high resolution) during the initial JPEG compression may allow for a larger range of viable scan groups. A.5 RECORD FORMAT CONVERSION TIMES We provide bandwidth-optimized record baselines in Figure 14, where we re-encode the images using a statically-chosen level of compression. These baselines re-encode the images with 50% quality and 90% JPEG quality, respectively, to reduce dataset size at a fixed level of fidelity. It is worth noting that re-encoding images compounds with the original JPEG compression, so the re-encoded image quality may be lower than 50% or 90% quality compared to the images in their original lossless form. This is in contrast to PCRs, which losslessly convert the images into a progressive format, which allows dynamic access to the level of fidelity.
We observe that both the baseline method of dataset bandwidth reduction and the PCR method can take considerable encoding time, since the encoding time scales proportionally to the dataset size. We also observe that the PCR method is competitive with the baseline (1.15× to 2.98×) in terms of encoding time. PCRs avoid having to re-encode a dataset at multiple fidelity levels, and, therefore, they can save both storage space and encoding time. Converting the full ImageNet into record format takes roughly 16× longer than the 6 minutes needed for the 10× smaller subsampled dataset—the PCR conversion is 96 minutes (53 minutes are spent in JPEG conversion). One reason for this additional slowdown is that any system caches (e.g., in the distributed filesystem or the file cache on the converter node) are less likely to see a cache hit due to the working set size being larger. Although the exact conversion times are dependent on implementation, hardware, and the dataset, conversion times can be in the range of one hour of compute time per 100 GB.

A.6 COARSE GRAINED VS. FINE GRAINED CARS EXPERIMENTS We provide experiments validating that compression needs vary within the same dataset for different tasks in Figure 15 and Figure 16, which show accuracy and loss, respectively. This experiment simply coarsens the granularity of the classification task, and demonstrates that lower scan groups can be used for tasks which are easier. The full range of classes is used for Baseline (i.e., car make, model, and year create a unique class), only car make is used for Make-Only, and a binary classification task of Corvette detection is used for Is-Corvette. We can see that compared to the original task, the coarse tasks reduce the gap between scan groups, and the binary task closes the gap even more. This suggests that as the tasks get easier, the tolerance for lower scan groups grows. Simply re-assigning the class labels to a coarser class reduces the complexity of the task and closes the accuracy gap across scan groups. A fixed PCR record encoding (i.e., without re-encoding) can support multiple tasks at the optimal quality, whereas static approaches may need one encoding per task. Some training methodologies, such as Progressive GAN training (Karras et al., 2018), utilize different dataset qualities over the course of training (e.g., training with a coarse-to-fine quality schedule), and, thus, a single training session may consume dozens of distinct dataset qualities.

A.7 IMAGENET-1000 RESULTS We provide the full ImageNet (i.e., 1000 classes) results with ResNet-18 and ShuffleNetv2 in Figure 17. Only group 5 and the baseline are shown, since lower group numbers have difficulty achieving baseline accuracy parity. The results show that PCRs can speed up training by a factor of 2 while retaining accuracy even with large scale (i.e., over 1 million samples) training.
Figure 16 (panels (a)–(c): ResNet-18 on Baseline, Make-Only, and Is-Corvette; panels (d)–(f): ShuffleNetv2 on the same tasks; x-axis: time in seconds, y-axis: train log loss; curves: scan groups 1, 2, 5, and Baseline): Training loss with ResNet-18 (top) and ShuffleNetv2 (bottom) on a coarser version of the Stanford Cars dataset. The full range of classes is used for Baseline (i.e., car make, model, and year create a unique class), only car make is used for Make-Only, and a binary classification task of Corvette detection is used for Is-Corvette. The gap between scan groups closes as the task is made simpler. Time is the x-axis (seconds) and is relative to first epoch. 95% confidence intervals are shown.

A.8 IMAGE EXAMPLES BY SCAN We provide image examples from each dataset that illustrate each scan group in Figure 18. Reading more scans, and, thus, data, from a progressive image results in higher fidelity images, but there are diminishing returns. Images can use a remarkably low number of scan groups without impacting visual quality, which manifests in bandwidth savings if used accordingly.

Figure 17 (panels: (a) ResNet-18 train loss, (b) ResNet-18 top-1 test accuracy, (c) ShuffleNetv2 train loss, (d) ShuffleNetv2 top-1 test accuracy; x-axis: time in seconds; curves: scan group 5 and Baseline): Training loss and test accuracy with ResNet-18 (top) and ShuffleNetv2 (bottom) on the 1000 class ImageNet Dataset. Time is the x-axis (seconds) and is relative to first epoch. 95% confidence intervals are shown.
1. What is the focus and contribution of the paper on reducing storage bandwidth for training deep neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its ability to reduce storage bandwidth?
3. What are the weaknesses of the paper, especially regarding its comparison with other works in the literature?
4. Do you have any concerns about the validation of the method's superiority over state-of-the-art approaches?
Review
Review
This paper introduces Progressive Compressed Records (PCRs), an on-disk format for fetching and transporting training data that aims to reduce the storage bandwidth overhead of training large-scale deep neural networks. This is a well-written paper that includes all the required background and related work, as well as an easy-to-understand example that runs through the manuscript, explaining what the reader needs to know in order to appreciate the work. The empirical results of several experiments show that PCRs require up to two times less storage bandwidth while retaining model accuracy. My only concern is that although the related work section provides a thorough survey of current methods in the literature, the authors do not empirically evaluate state-of-the-art alternatives and compare against them. I believe such a comparison is necessary to truly validate the superiority of their method over the state of the art.
ICLR
Title
Progressive Compressed Records: Taking a Byte Out of Deep Learning Data

Abstract
Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by using progressive compression to convert a single dataset into multiple datasets of increasing fidelity—all without adding to the total dataset size. Empirically, we implement PCRs and evaluate them on a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ. Our results show that different tasks can tolerate different levels of compression. PCRs use an on-disk layout that enables applications to efficiently and dynamically access appropriate levels of compression at runtime. In turn, we demonstrate that PCRs can seamlessly enable a 2× speedup in training time on average over baseline formats.

1 INTRODUCTION Distributed deep learning exploits parallelism to reduce training time, and consists of three key components: the data pipeline (storage), the forward/backward computation (compute), and the variable synchronization (network). A plethora of work has investigated scaling deep learning from a compute- or network-bound perspective (e.g., Dean et al., 2012; Cui et al., 2016; Abadi et al., 2015; Cui et al., 2014; Jouppi et al., 2017; Lim et al., 2019; Zhu et al., 2018; Alistarh et al., 2017; Lin et al., 2018; Wen et al., 2017; Wangni et al., 2018; Zhang et al., 2017). However, little attention has been paid toward scaling the storage layer, where training starts and training data is sourced. Unfortunately, hardware trends point to an increasing divide between compute and networking or storage bandwidth (Li et al., 2016; Lim et al., 2019; Kurth et al., 2018). For example, the transportation of data for machine learning is a key factor in the design of modern data centers (Hazelwood et al., 2018), which are expected to be serviced by slow, yet high capacity, storage media for the foreseeable future (David Reinsel, 2018; Cheng et al., 2015; Rosenthal et al., 2012). This, combined with the memory wall—a lack of bandwidth between compute and memory—suggests that, while computation may be sufficient moving forward, the mechanisms for moving data to the compute may not (Wulf & McKee, 1995; Kwon & Rhu, 2018; Hsieh et al., 2017; Zinkevich et al., 2010).
The storage pipeline is therefore a natural area to seek improvements in overall training times, which manifest from the storage medium, through the network, and into the compute nodes. In this work, we propose a novel on-disk format called Progressive Compressed Records (PCRs) as a way to reduce the bandwidth cost associated with training over massive datasets. Our approach leverages a compression technique that decomposes each data item into deltas, each of which increases data fidelity. PCRs utilize deltas to dynamically compress entire datasets at a fidelity suitable for each application’s needs, avoiding duplicating the dataset (potentially many times) at various fidelity levels. Applications control the trade-off between dataset size (and, thus, bandwidth) and fidelity, and a careful layout of deltas ensures that data access is efficient at a storage medium level. As a result, we find that for a variety of popular deep learning models and datasets, bandwidth (and therefore training time) can be easily reduced by 2× on average relative to JPEG compression without affecting model accuracy. Overall, we make the following contributions: 1. In experiments with multiple architectures and several large-scale image datasets, we show that neural network training is robust to data compression in terms of test accuracy and training loss; however, the amount of compression that can be tolerated varies across learning tasks. 2. We introduce Progressive Compressed Records (PCRs), a novel on-disk format for training data. PCRs combine progressive compression and careful data placement to enable applications to dynamically choose the fidelity of the dataset they consume, reducing data bandwidth. 3. We demonstrate that by using PCRs, training speed can be improved by 2× on average over standard formats using JPEG compression. This is achieved by selecting a lower data fidelity, which, in turn, reduces the amount of data read without significantly impairing model performance. 2 BACKGROUND Two complementary concepts make up the process of storing training data: the layout of the data on the storage medium and the representation of the data. Data layout is important because it can help fully utilize the bandwidth potential of the underlying storage system. Data representation is important because it can reduce the amount of data transferred per data unit (i.e., a bandwidth requirement reduction). An example of data representation within the scope of this work is compression, which increases the computation per bit—a key property to consider as computation increases faster than bandwidth to storage. Compression may lower image quality by introducing artifacts or blur. Record Layouts. Learning from data requires sampling points from a training set, which can cause small, random accesses that are detrimental to the performance of the storage device. Record layouts, such as TensorFlow’s TFRecords (TFRecords) or MXNet’s ImageRecord (ImageRecord), attempt to alleviate this problem by batching data points together to increase access locality. Batches of training data (i.e., dataset subsets) are then accessed together, amortizing delays in access time across multiple data points. These batches of data are called records. The key to any record layout is the serialization, which is the conversion of data structures into byte streams. Record designs have different performance properties (e.g., space or access time) when written to disk, as shown in Figure 1. Image Compression. 
Compressed forms are commonly used to represent training data. JPEG (Wallace, 1992) is one of the most popular formats for image compression and is used ubiquitously in machine learning (e.g., Deng et al., 2009; Russakovsky et al., 2015; Lin et al., 2014; Everingham et al., 2010). Most compression formats (including JPEG) only allow for the compression level, i.e., the trade-off between data size and fidelity, to be set at encoding time, which often results in choosing this level independent of the application. This can result in over-compression, which may negatively impact application convergence quality, or under-compression, which results in excess data size, and thus, slower storage system performance. Worse, deep learning pipelines often involve an application-defined post-processing step (e.g., data augmentation), which may further distort an image and obscure the relationship between image fidelity and model accuracy (Bishop, 1995; Karras et al., 2018; Dziugaite et al., 2016; Arnab et al., 2018). While setting encoding-time parameters is unavoidable, the ability to decompress data as it becomes available (i.e., dynamic compression) provides a means to avoid some of the bandwidth expenses of under-compression by simply terminating decompression once sufficient fidelity is reached. In Figure 2, we provide a high-level illustration of the JPEG algorithm, which can be customized to support dynamic compression. First, an image is split into blocks of size 8 × 8. Each block is converted into the frequency domain, such that frequency 0 is the average color of the block, and higher frequencies encode rapid changes in the block. The low frequencies, such as the average value of the block, store the bulk of the perceptually-relevant content in the image (e.g., knowing the block is mostly blue is more important than knowing a white wave is rippling through it). Quantization, which discards information from the block and results in compression, thus prioritizes discarding higher frequencies. The resulting quantized table is then serialized into a flat form. Since data is rendered on a screen from left to right, top to bottom, it makes sense to encode the data in this manner, which results in a sequential format1. Decoding the resulting data is simply a matter of inverting (albeit losslessly) the process that we just described. Progressive Image Compression. Progressive formats allow data to be read at varying degrees of compression without duplication. With the sequential case, data is ordered by blocks, and thus, partially reading the data results in “holes” in the image for unread blocks (Wallace, 1992). Dynamic compression ensures that all blocks get some information (deltas) before revising them (with more deltas). As progressive formats are simply a different traversal of the quantization matrix, with all else being equal, they contain the same information as sequential JPEG (JPEGTran LibJPEG). Progressive JPEG, combined with an additional rearrangement of data, forms the basis of the idea behind PCRs. In Figure 2, non-progressive formats serialize the image matrix in one pass, while progressive formats serialize the matrix in disjoint groups of deltas which are called scans. Scans are ordered by importance (e.g., the first few scans improve fidelity more than subsequent scans). Thus, any references to images generated from scan n will implicitly assume that the image decoder had access to all prior scans (i.e., {scan 1, scan 2, . . . , scan (n− 1)}). 
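To make the frequency-domain description above concrete, the toy sketch below applies a DCT to a single 8 × 8 block and zeroes out the high frequencies, loosely mimicking what JPEG quantization does. It is an illustration of the intuition only, not the actual JPEG codec: the zig-zag ordering, quantization tables, and entropy coding of real JPEG are all simplified away, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.fft import dctn, idctn


def keep_low_frequencies(block, keep):
    """Toy JPEG-style step: transform an 8x8 block to the frequency domain,
    zero out all but the lowest-frequency coefficients, and transform back."""
    coeffs = dctn(block - 128.0, norm="ortho")             # coefficient [0, 0] is the block average
    freq_rank = np.add.outer(np.arange(8), np.arange(8))   # crude low-to-high frequency ordering
    coeffs[freq_rank >= keep] = 0.0                        # discard the higher frequencies
    return idctn(coeffs, norm="ortho") + 128.0


block = np.full((8, 8), 120.0)                 # a mostly flat block (e.g., "mostly blue")
block[4, :] = 200.0                            # with one rapid change (e.g., a white ripple)
coarse = keep_low_frequencies(block, keep=1)   # keeps roughly just the average colour
finer = keep_low_frequencies(block, keep=8)    # recovers much more of the detail
```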
The bottom of Figure 2 shows how image fidelity improves from a single scan to utilizing all scans. ("Sequential" above refers to in-memory ordering and should not be confused with sequential on-disk access.) 3 PROGRESSIVE COMPRESSED RECORDS In this section, we introduce a novel record format for machine learning training called Progressive Compressed Records (PCRs). PCRs are a combination of both layout and data representation. Efficient layouts guarantee that hardware is fully utilized (in terms of bandwidth), while efficient data representations can reduce the total amount of work that is required of the system. To this end, we introduce the concept of scan groups in Section 3.1, which leverage both layout and progressive compression to obtain dynamic compression, allowing high-performance reads while reducing the amount of data read. Using progressive compression, scan groups break images into deltas, which are then rearranged in order to facilitate reduced, yet sequential, data access. In Section 3.2, we discuss how PCRs are implemented, covering both creating PCRs (encoding) and reading them (decoding). The benefits of the PCR implementation boiling down to a bit shuffle are that: 1) PCRs are easy to implement, 2) they are fundamentally lossless, and 3) processing them is fast. As we demonstrate in Section 4, while PCRs can be implemented easily, they manifest in large speedups for a variety of scenarios. Further, PCRs can be generalized beyond images and JPEG. 3.1 SCAN GROUPS Scan groups are a collection of scans (deltas) of the same fidelity. Scan groups combine layout with progressive compression to allow reading subsets of the compressed data with high hardware efficiency. PCRs make the assumption that the entire training data will be read at the same fidelity. Using this assumption, scan groups rearrange the data such that all deltas of the same fidelity are grouped together. This, in turn, enables groups of deltas to be read together sequentially, which creates dynamicity in the decoding process. Since scans are sorted by importance, and scan groups are a set of scans, the scan groups are also sorted by importance. To paint a clear representation of how scan groups work, we point the reader to Figure 3. PCRs begin with some metadata which is assumed to be needed by all machine learning tasks, such as labels or bounding boxes. In practice, metadata is small in size, and, thus, the space overheads are negligible. The metadata is followed by scan groups, which consist of scans. The scan 1 representation of the shark in Figure 2 will be available in its record once data is read up to offset 1. Likewise, the scan 3 representation will be available once the record is read up to offset 3, and the representation will be more crisp as 3 scans were used per image, rather than 1. Reading up to the end of the record yields the most complete representation of the image. As scan groups consist of groups of the same fidelity, every image contained in a record is available at the same fidelity at the same group offset. Users of PCRs can read data at a certain scan fidelity by simply reading the on-disk byte stream from the start of the PCR (i.e., offset 0) to the byte offset corresponding to that scan group. Partially reading the records results in bandwidth savings without re-encoding the data. 3.2 IMPLEMENTATION There are two fundamental PCR implementation details: the encoding process and the decoding process.
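Before turning to those two processes in detail, note that the read path described in Section 3.1 reduces to a single sequential prefix read of the record. The sketch below assumes that the per-record byte offsets at which each scan group ends are available (in our reading of Section 3.2, they are stored in a small metadata database); the function and variable names are illustrative, not the paper's API.

```python
def read_scan_group(pcr_path, group_end_offsets, group):
    """Read a PCR byte stream from offset 0 up to the end of the requested scan group."""
    with open(pcr_path, "rb") as f:
        return f.read(group_end_offsets[group])


# Hypothetical offsets, as they might be looked up from the PCR metadata database;
# group 0 covers only the labels/metadata at the head of the record.
# group_end_offsets = {0: 4_096, 1: 1_500_000, 2: 2_900_000, 5: 6_400_000, 10: 13_000_000}
# partial_record = read_scan_group("imagenet100_000.pcr", group_end_offsets, group=5)
```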
The encoding process transforms a set of JPEG files into a directory, which contains 1) a database for PCR metadata and 2) at least one .pcr file. The decoding process, which takes the directory as input and yields a set of JPEG images, efficiently inverts a subset of the encoding. The dataset is split into many PCRs, and, thus, the training process is reading tens to hundreds of .pcr files per epoch. The data loader is where the PCR decoding library interfaces with the inputs provided to deep learning libraries (e.g., TensorFlow (Abadi et al., 2015), MXNet (Chen et al., 2015), PyTorch (Paszke et al., 2017)). Below, we describe how each of these steps is done. Encoding. Given a set of images, the PCR encoder must break the images into scans, group the scans into scan groups, and sort the scan groups by fidelity. Once the groups are sorted, the PCR encoder can serialize the groups while taking note of their offsets (so that subsets may later be decoded). The metadata (e.g., labels) is prepended to the serialized representation, and the serialized representation is written to disk. We focus on grouping JPEG due to its generality, but PCRs can use any dataset-level progressive format. Images can be decomposed in both space and fidelity; other data modalities (e.g., video) may also have time. Our implementation uses JPEGTRAN (JPEGTran Man Page) to losslessly transform the set of JPEG images into a set of progressive JPEG images. With the default settings, each JPEG is broken up into 10 scans. The encoder scans the binary representation of the progressive JPEG files, searching for the markers that designate the end of a scan group. The encoder thus has access to all 10 offsets within the JPEG files that can be used to determine the boundaries between scan regions. Forming scan groups requires grouping the scan regions with the same fidelity together, which can be done in one pass over the set of images corresponding to that PCR. This grouping must be reversible, as the decoding process will un-group the scans to reconstruct the original images. This grouping can be done with existing serialization libraries. We use Protobuf (Protobuf) to serialize the groups as well as the labels. However, it is key that every group (and the metadata) be serialized as a separate message, as Protobuf can rearrange the contents within a message, and thus can rearrange the ordering of the groups themselves. We finally concatenate the contents of the messages and write them out as one file. As shown in Appendix A.5, any record format conversion can be expensive; PCRs benefit from requiring only a single conversion for multiple tasks. Decoding. To decode a PCR file, one has to first lookup the file’s scan group offsets in the database. The offsets provide sufficient information to do a partial read of the file (e.g., instead of reading the entire file, we read only enough bytes to read up to the desired scan group). Decoding the JPEGs requires inverting the PCR scan-group grouping process for the available scan-groups prior to JPEG decode. Since we are missing scan-groups, we terminate the byte stream with an End-of-Image (EOI) JPEG token—this technique allows most JPEG decoders to render the byte stream with only the available subset of scans. The bulk of the inverse conversion is done in 150 lines of C++ code. Loader. We implemented PCR loaders using PyTorch’s dataloader as well as DALI (NVIDIA, 2018)’s ExternalSource operator to return batches of images at a configurable fidelity (with the corresponding labels). 
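Returning to the encoding and decoding steps above, the following is a minimal per-image sketch of the two marker-level tricks they rely on: locating scan boundaries by searching for JPEG Start-of-Scan (SOS, 0xFFDA) markers at encoding time, and terminating a truncated byte stream with an End-of-Image (EOI, 0xFFD9) token at decoding time. The actual PCR encoder additionally groups scans of the same fidelity across many images and records the group offsets with Protobuf, which is omitted here; a production implementation would also parse the JPEG marker structure properly rather than searching for raw byte patterns.

```python
SOS = b"\xff\xda"  # Start-of-Scan marker: each scan in a progressive JPEG begins with one
EOI = b"\xff\xd9"  # End-of-Image marker


def scan_offsets(jpeg_bytes):
    """Byte offsets of every scan in a progressive JPEG (one per SOS marker)."""
    offsets, pos = [], jpeg_bytes.find(SOS)
    while pos != -1:
        offsets.append(pos)
        pos = jpeg_bytes.find(SOS, pos + 2)
    return offsets


def keep_first_scans(jpeg_bytes, n_scans):
    """Drop all scans after the first n_scans and close the stream with EOI,
    so that common JPEG decoders still render the partial image."""
    offsets = scan_offsets(jpeg_bytes)
    if n_scans >= len(offsets):
        return jpeg_bytes
    return jpeg_bytes[:offsets[n_scans]] + EOI
```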
We find that a pipeline abstraction simplifies loader design, since recordbased datasets can be easily iterated sequentially. In contrast, the PyTorch Dataloader abstrac- tion, which assumes that we can index randomly into an in-memory data structure (e.g., i = RandInt(0, n); (x, y) = data[i];), is harder to use for constantly fetching record formats off disk. Our implementation, while being only several hundred lines of code, obtains image rates that are competitive (e.g., faster/slower depending on number of scans) with the included DALI TFRecord loader, showing that PCRs can be implemented efficiently (i.e., fast enough to rarely bottleneck data loading) with a low amount of engineering effort. 4 EXPERIMENTS This section presents our evaluation of PCRs using a suite of large-scale image datasets. As large images are more taxing to a system’s network and storage, our evaluation focuses on datasets with high-resolution images. We describe our experimental setup in Section 4.1. We present our evaluation results in Section 4.2, showing that halving data bandwidth per image results in comparable accuracy but with half the training time. In Section 4.3, we analyze the intuitive relationship between objective measures of image fidelity and time-to-accuracy. Finally, in Section 4.4, we present results that trace the training time speedups to the data loading times themselves. 4.1 EVALUATION SETUP Our evaluation uses the ImageNet ILSVRC (Deng et al., 2009; Russakovsky et al., 2015), HAM10000 (Tschandl et al., 2018), Stanford Cars (Krause et al., 2013), and CelebA-HQ (Karras et al., 2018) datasets, which are described below. See Appendix A.4 for additional details. Datasets. • ImageNet-100 ImageNet provides a wide diversity of classes, of which we focus on the first 100 to make training times more tractable. Since classes are roughly ordered by ImageNet categories, this results in a fine-grained, i.e., hard to classify, multiclass task. We convert the dataset into PCRs in batches of 1024, which results in 126 PCRs. We use the full ImageNet dataset in Appendix A.7. • HAM10000 We split the HAM10000 dataset randomly 80%/20% between train and test. We convert the dataset into PCRs in batches of 64, which results in 125 PCRs of similar size as the ones used for ImageNet-100. • Stanford Cars The Stanford Cars dataset is another fine-grained classification dataset, since all images are cars, and there are 196 classes spread over 16k images. We believe this dataset highlights some of the worst-case training scenarios, as it is considerably easier to predict highly compressed variants of unrelated images by exploiting low frequency image statistics (e.g., planes vs. frogs). We explore a coarse-grained version of Cars in Appendix A.6. Cars has 63 PCRs. • CelebAHQ-Smile CelebA-HQ is a high-resolution derivative of the CelebA dataset (Liu et al., 2015), which consists of 30k celebrity faces at 10242. We use the annotations provided by CelebA to construct a smiling or not smiling dataset. We split the 30k dataset into 80%/20% train/test, and we convert the training set into 93 PCRs. All datasets utilize resizing, crop, and horizontal-flip augmentations, as is standard for ImageNet training. We provide examples of scan groups for these datasets in Appendix A.8. Training Regime. We use pretrained ImageNet weights for HAM10000 and Cars due to the limited amount of training data. 
We use standard ImageNet training, starting the learning rate at 0.1 (with gradual warmup (Goyal et al., 2017)) and dropping it on epoch 30 and 60 by 10×. After augmentations, all inputs are of size 224× 224. The pretrained experiments (HAM10000 and Cars) start at a learning rate of 0.01 to avoid changing the initialization too aggressively. We use fp16 training (Micikevicius et al., 2018) as it results in an additional 10% images per second (see Appendix A.3). We use a ResNet18 (He et al., 2016) and ShuffleNetv2 (Ma et al., 2018) architecture for our experiments with a batch size of 128 per each worker. We run each experiment at least 3 times to obtain confidence intervals given different random seeds and sources of non-determinism such as multi-threading and I/O. System Setup. We run distributed experiments on a 16-node Ceph (Weil et al., 2006) cluster connected with a Cisco Nexus 3264-Q 64-port QSFP+ 40GbE switch. Each node has a 16– core Intel E5–2698Bv3 Xeon 2GHz CPU, 64GiB RAM, NVIDIA TitanX, 4TB 7200RPM Seagate ST4000NM0023 HDD, and a Mellanox MCX314A-BCCT 40GbE NIC. All nodes run Linux kernel 4.15 on Ubuntu 18.04, CUDA10, and the Luminous release (v12.2.12) of Ceph. We use six of the nodes as Ceph nodes; five nodes are dedicated as storage nodes in the form of Object Storage Devices (OSDs), and one node is used as a Ceph metadata server (MDS). The remaining 10 nodes are used as machine learning workers for the training process. This means there is a 2:1 ratio between compute and storage nodes. We use PyTorch (Paszke et al., 2017) (v1.12) with NVIDIA Apex (Apex) (v0.1) and NVIDIA DALI (NVIDIA, 2018) (v0.14.0). We use at least four worker threads to prefetch data in the loader. While we focus on this particular distributed setting, we observe similar time-to-accuracy gains on a single machine with eight GPUs sharing the same disk, and we believe the results will generalize to different setups. 4.2 TIME TO ACCURACY The time-to-accuracy results for ResNet18 training are presented in Figure 4, while those of ShuffleNetv2 are presented in Figure 6. See Appendix A.2 for a tabular view and Appendix A.1 for the corresponding training loss results. All scan groups within a dataset were run for the same amount of epochs, so lower scan groups finish earlier. 90 epochs are shown for ImageNet, 150 epochs are shown for HAM10000, 250 epochs are shown for Stanford Cars, and 90 epochs are shown for CelebAHQ-Smile. We sample the test accuracy every 15 epochs for non-ImageNet datasets to reduce interference with training measurements. To avoid measuring implementation differences with other loaders, our evaluation focuses on the differences obtained by reading various amounts of scan groups. Reading all the data (up to scan group 10) is the baseline. First, we note that for all datasets, except for Cars, PCRs provide a 2× boost to time-to-accuracy compared to the baseline. The reason for this speedup is that lower scan groups are smaller. As shown in Figure 5, scan group 5 is roughly half the size of the baseline, and scan group 1 is a fifth of scan group 5 (i.e., a potential 10× bandwidth savings). This trend holds across datasets (see Appendix A.1). As we will discuss in Section 4.4, the space savings manifest in reduced dataloader latencies. Second, we note that there is an inherent trade-off between convergence quality and the speedup attained by using less storage resources. 
In general, although lower fidelity scan groups allow the system to operate more efficiently, they do so at the expense of model convergence. Scan group 1, the lowest fidelity scan, performs poorly, especially on Cars, where fine-grained details are important. Scan groups limit the maximum achievable accuracy on a task; if learning plateaus prematurely, applications should raise the scan group in a manner similar to dropping the learning rate. Third, the relative rankings of scan groups are relatively stable across models and datasets, which reduces tuning efforts in choosing the appropriate scan group. We further relate these rankings to the fidelity of the scan groups in Section 4.3. Our conclusion is that, for most datasets, scan group 5 costs half as much in terms of bandwidth, but reaches the same level of test accuracy as the baseline—thus, it is a good default. This is most apparent for ImageNet and HAM10000, which are challenging enough for small variations in image fidelity to make a commensurate difference in test accuracy. In contrast, Cars is too fine-grained to allow images to be degraded, and CelebAHQ-Smile is too coarse-grained for image degradation to matter. 4.3 THE RELATIONSHIP BETWEEN IMAGE FIDELITY AND TEST ACCURACY We use MSSIM (Wang et al., 2003), a standard measure of image similarity, to compare how various scans approximate the reference image, and we show the results in Figure 7. We find that there is a strong connection between MSSIM and the resulting final test accuracy, especially when comparing scan groups within a task. Our preliminary tests demonstrate that scan groups that have very similar MSSIM perform very similarly, which is why only groups 1, 2, 5, and the baseline are shown. Due to the way progressive JPEG is coded by default, groups tend to cluster (e.g., 2, 3, and 4 are usually similar, while 5 introduces a difference). We note that MSSIM being poor (e.g., scan group 1 for cars) or MSSIM being close to baseline (e.g., scan group 5 for HAM10000) are good predictors of relative test accuracy within tasks. MSSIM can therefore be used as a diagnostic for choosing scans. 4.4 THE RELATIONSHIP BETWEEN SCANS AND DATA STALLS The datasets we evaluated show that data loading can slow down the training process. To highlight these slowdowns, and the improvements PCRs achieve by not using all scan groups, we present the loading time of data for the ResNet18 ImageNet-100 run in Figure 8. We obtain similar results for the other datasets. The baseline of using all scan group results in high periodic loading stalls, where the prefetching queue was drained. Upon blocking, training cannot proceed until the worker threads obtain a full batch of data. Periods of (mostly) no stalls are caused by both threads pre-fetching the data and single records servicing multiple minibatches. Using fewer scan groups reduces the amount of data read, which results in lower magnitude stalls. We observe these stalls with both DALI and PyTorch loaders. 5 RELATED WORK Training Over Large Datasets. Training with massive amounts of parallelism (thus stressing system bandwidth) while achieving near-linear speedup has been the focus of previous work, and it highlights a practical need for efficient data pipelines at scale. A common objective is training mod- els over ImageNet in a record amount of time (Goyal et al., 2017; You et al., 2018; Jia et al., 2018; Ying et al., 2018; Yamazaki et al., 2019). 
This line of work, while demonstrating immense bandwidth needs, typically keeps data in memory, avoiding storage problems altogether. Recently, the high performance computing community has become interested in training models at massive scale (27k GPUs) (Kurth et al., 2018). Since each GPU matches a disk in bandwidth, the dataset was partitioned among the local memory/storage of the nodes, avoiding the distributed filesystem. Our work attempts to reduce the storage bottleneck altogether, such that anything from a couple disks to a distributed file system could service many GPUs. A separate line of work shows that I/O is a significant bottleneck for certain tasks and proposes optimizing I/O via a set of deep-learning specific optimization to LMDB (Pumma et al., 2019). In contrast, our focus is more on data representation, which is independent of the internals of the storage system. Production systems such as TFX (Baylor et al., 2017) have used custom Protobuf parsers to get 2–5× speedups for simple (e.g., linear) models; these techniques are complementary to ours and reduce loader computational overheads. Dataset Reduction Techniques. The availability of larger datasets has spawned interest in learning algorithms that guaranteed both “good” model accuracy and lower computational complexity. Data reduction techniques, such as sketching, coresets, clustering, and sampling, have been used to reduce the size of a training set (Karnin & Liberty, 2019; Feldman et al., 2013; Liberty, 2013; Woodruff, 2014; Daniely et al., 2017; Kabkab et al., 2016; Bachem et al., 2017). A different approach is to use the unaltered training set, but reduce the size of the active training set to reduce bandwidth requirements (Matsushima et al., 2012). In contrast, we modify the data representation and layout to be more efficient across a wide variety of models. Compression. Finally, the reduction of data size via compression methods is ubiquitous across computer systems. To avoid costly model transmission/storage, prior work compressed neural network models (Han et al., 2016b;a; 2015; Cheng et al., 2017; Xu et al., 2018; Hwang & Sung, 2014; Anwar et al., 2015; Denton et al., 2014). Similarly, dataset distillation (Wang et al., 2018) compresses a model’s parameters into a few training examples. Our work attempts to compress data for training, and not the network itself. Prior work has looked into optimizing training systems by compressing neural network training network traffic (Lim et al., 2019; Alistarh et al., 2017; Lin et al., 2018; Wen et al., 2017; Wangni et al., 2018; Zhang et al., 2017). This trend is not specific to machine learning; prior work in databases, computer memories, and the web used compression to reduce system bandwidth requirements (Zukowski et al., 2006; Abadi et al., 2006; Pekhimenko et al., 2018; 2012; Yan et al., 2017; Agababov et al., 2015). Our work focuses on bandwidth for ML data pipelines by utilizing the compression robustness found in most models. Other work modifies models to be able to directly train on compressed representations for the purpose of avoiding decoding or reducing model complexity (Gueguen et al., 2018; Torfason et al., 2018; Fu & Guimaraes, 2016; Ulicny & Dahyot, 2017). Our work differs in motivation, as we do not focus on model computation or make modifications to the models. 
Previous work has investigated how image degradation (e.g., JPEG artifacts) affects inference (Dodge & Karam, 2016; Vasiljevic et al., 2016; Peng et al., 2016; Zheng et al., 2016); in contrast, our work focuses on the effects of compression on training.

6 CONCLUSION To continue making advances in machine learning, researchers will need access to larger and larger datasets, which will eventually spill into (potentially distributed) storage systems. Storage and networking bandwidth, which are precious resources, can be better utilized with efficient compression formats. We introduce a novel record format, Progressive Compressed Records (PCRs), that trades off data fidelity with storage and network demands, allowing the same model to be trained with 2× less storage bandwidth while retaining model accuracy. PCRs use progressive compression to split training examples into multiple examples of increasingly higher fidelity without the overheads of naive approaches. PCRs avoid duplicating space, are easy to implement, and can be applied to a broad range of tasks dynamically. While we apply our format in this work specifically to images with JPEG compression, PCRs are general enough to handle various data modalities or additional compression techniques; future work will include exploring these directions in fields outside of visual classification, such as audio generation or video segmentation.

A APPENDIX
A.1 LOSS, SPACE SAVINGS, AND ACCURACY PER EPOCH Below, we provide additional experiment plots that were omitted in the main text. Figure 9 and Figure 10 contain the loss over time for the ResNet-18 and ShuffleNetv2 experiments shown in Section 4. Figure 11 extends Figure 5 to show the scan sizes for all datasets. It is worth noting that the Top-5 accuracy trends mirror the Top-1 accuracy trends for ImageNet and Cars. To measure the effect of compression without accounting for time, we show accuracy vs. epoch plots in Figure 12 and Figure 13. While compression can itself be viewed as a data augmentation (e.g., removing high frequency features that can possibly cause overfitting), we notice that it does not usually improve accuracy. Rather, most of the gains in time-to-accuracy are from faster image rates.

Figure 9 (panels: (a) ImageNet-100, (b) HAM10000, (c) Stanford Cars, (d) CelebAHQ-Smile; x-axis: time in seconds, y-axis: train log loss; curves: scan groups 1, 2, 5, and Baseline): Training loss with ResNet-18. Time is the x-axis (seconds) and is relative to first epoch. 95% confidence intervals are shown.

Figure 10 (same panels and axes as Figure 9): Training loss with ShuffleNetv2. Time is the x-axis (seconds) and is relative to first epoch. 95% confidence intervals are shown.
1. What is the main contribution of the paper regarding image processing?
2. What are the potential limitations of the proposed approach, particularly in terms of reading speed and training efficiency?
3. Do you have any suggestions for additional experiments or analyses that could enhance our understanding of the method's effectiveness?
4. How does the reviewer assess the clarity and quality of the presented results, specifically regarding Figure 3?
5. Are there any concerns regarding the choice of dataset used in the experiments?
6. Have the authors explored alternative image compression formats, and how does their proposed method relate to existing compression techniques?
Review
Review
The paper proposes using progressive encoding of images and a rearrangement of data blocks within images to improve reading speed and therefore training speed. To fully analyze the maximum possible training speed, it would be great to measure the upper bound on images/sec when avoiding disk reads and serving images directly from memory. Decoding a typical progressive JPEG image usually takes about 2-3 times as long as decoding a non-progressive JPEG at full resolution; analyzing the time to read versus the time to decode the images would be great. It is not clear how changing the total number of groups would affect the image size and the reading speed. Based on the current experiments, it is not clear what the impact of the batch size is when creating PCRs and when reading the image blocks, or what the impact of the batch size is on the training speed. Figure 3 is really hard to read when comparing times to convergence; the authors should provide a table with times to X% accuracy. Although time to convergence is the key metric, it would be great to know the difference in images/sec across the different settings. Using ImageNet with 100 classes (it is not clear how the 100 classes were chosen) instead of the usual 1000 classes can distort the results, since it is not clear whether higher resolution would be needed to distinguish more classes or not. Have the authors considered other image compression formats like WebP? How tied is the proposed record encoding to the image compression format?
ICLR
Title Progressive Compressed Records: Taking a Byte Out of Deep Learning Data Abstract Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by using progressive compression to convert a single dataset into multiple datasets of increasing fidelity—all without adding to the total dataset size. Empirically, we implement PCRs and evaluate them on a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ. Our results show that different tasks can tolerate different levels of compression. PCRs use an on-disk layout that enables applications to efficiently and dynamically access appropriate levels of compression at runtime. In turn, we demonstrate that PCRs can seamlessly enable a 2× speedup in training time on average over baseline formats. N/A Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by using progressive compression to convert a single dataset into multiple datasets of increasing fidelity—all without adding to the total dataset size. Empirically, we implement PCRs and evaluate them on a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ. Our results show that different tasks can tolerate different levels of compression. PCRs use an on-disk layout that enables applications to efficiently and dynamically access appropriate levels of compression at runtime. In turn, we demonstrate that PCRs can seamlessly enable a 2× speedup in training time on average over baseline formats. 1 INTRODUCTION Distributed deep learning exploits parallelism to reduce training time, and consists of three key components: the data pipeline (storage), the forward/backward computation (compute), and the variable synchronization (network). A plethora of work has investigated scaling deep learning from a compute- or network-bound perspective (e.g., Dean et al., 2012; Cui et al., 2016; Abadi et al., 2015; Cui et al., 2014; Jouppi et al., 2017; Lim et al., 2019; Zhu et al., 2018; Alistarh et al., 2017; Lin et al., 2018; Wen et al., 2017; Wangni et al., 2018; Zhang et al., 2017). However, little attention has been paid toward scaling the storage layer, where training starts and training data is sourced. Unfortunately, hardware trends point to an increasing divide between compute and networking or storage bandwidth (Li et al., 2016; Lim et al., 2019; Kurth et al., 2018). For example, the transportation of data for machine learning is a key factor in the design of modern data centers (Hazelwood et al., 2018), which are expected to be serviced by slow, yet high capacity, storage media for the foreseeable future (David Reinsel, 2018; Cheng et al., 2015; Rosenthal et al., 2012). This, combined with the memory wall—a lack of bandwidth between compute and memory—suggests that, while computation may be sufficient moving forward, the mechanisms for moving data to the compute may not (Wulf & McKee, 1995; Kwon & Rhu, 2018; Hsieh et al., 2017; Zinkevich et al., 2010). 
The storage pipeline is therefore a natural area to seek improvements in overall training times, which manifest from the storage medium, through the network, and into the compute nodes. In this work, we propose a novel on-disk format called Progressive Compressed Records (PCRs) as a way to reduce the bandwidth cost associated with training over massive datasets. Our approach leverages a compression technique that decomposes each data item into deltas, each of which increases data fidelity. PCRs utilize deltas to dynamically compress entire datasets at a fidelity suitable for each application’s needs, avoiding duplicating the dataset (potentially many times) at various fidelity levels. Applications control the trade-off between dataset size (and, thus, bandwidth) and fidelity, and a careful layout of deltas ensures that data access is efficient at a storage medium level. As a result, we find that for a variety of popular deep learning models and datasets, bandwidth (and therefore training time) can be easily reduced by 2× on average relative to JPEG compression without affecting model accuracy. Overall, we make the following contributions: 1. In experiments with multiple architectures and several large-scale image datasets, we show that neural network training is robust to data compression in terms of test accuracy and training loss; however, the amount of compression that can be tolerated varies across learning tasks. 2. We introduce Progressive Compressed Records (PCRs), a novel on-disk format for training data. PCRs combine progressive compression and careful data placement to enable applications to dynamically choose the fidelity of the dataset they consume, reducing data bandwidth. 3. We demonstrate that by using PCRs, training speed can be improved by 2× on average over standard formats using JPEG compression. This is achieved by selecting a lower data fidelity, which, in turn, reduces the amount of data read without significantly impairing model performance. 2 BACKGROUND Two complementary concepts make up the process of storing training data: the layout of the data on the storage medium and the representation of the data. Data layout is important because it can help fully utilize the bandwidth potential of the underlying storage system. Data representation is important because it can reduce the amount of data transferred per data unit (i.e., a bandwidth requirement reduction). An example of data representation within the scope of this work is compression, which increases the computation per bit—a key property to consider as computation increases faster than bandwidth to storage. Compression may lower image quality by introducing artifacts or blur. Record Layouts. Learning from data requires sampling points from a training set, which can cause small, random accesses that are detrimental to the performance of the storage device. Record layouts, such as TensorFlow’s TFRecords (TFRecords) or MXNet’s ImageRecord (ImageRecord), attempt to alleviate this problem by batching data points together to increase access locality. Batches of training data (i.e., dataset subsets) are then accessed together, amortizing delays in access time across multiple data points. These batches of data are called records. The key to any record layout is the serialization, which is the conversion of data structures into byte streams. Record designs have different performance properties (e.g., space or access time) when written to disk, as shown in Figure 1. Image Compression. 
Compressed forms are commonly used to represent training data. JPEG (Wallace, 1992) is one of the most popular formats for image compression and is used ubiquitously in machine learning (e.g., Deng et al., 2009; Russakovsky et al., 2015; Lin et al., 2014; Everingham et al., 2010). Most compression formats (including JPEG) only allow for the compression level, i.e., the trade-off between data size and fidelity, to be set at encoding time, which often results in choosing this level independent of the application. This can result in over-compression, which may negatively impact application convergence quality, or under-compression, which results in excess data size, and thus, slower storage system performance. Worse, deep learning pipelines often involve an application-defined post-processing step (e.g., data augmentation), which may further distort an image and obscure the relationship between image fidelity and model accuracy (Bishop, 1995; Karras et al., 2018; Dziugaite et al., 2016; Arnab et al., 2018). While setting encoding-time parameters is unavoidable, the ability to decompress data as it becomes available (i.e., dynamic compression) provides a means to avoid some of the bandwidth expenses of under-compression by simply terminating decompression once sufficient fidelity is reached. In Figure 2, we provide a high-level illustration of the JPEG algorithm, which can be customized to support dynamic compression. First, an image is split into blocks of size 8 × 8. Each block is converted into the frequency domain, such that frequency 0 is the average color of the block, and higher frequencies encode rapid changes in the block. The low frequencies, such as the average value of the block, store the bulk of the perceptually-relevant content in the image (e.g., knowing the block is mostly blue is more important than knowing a white wave is rippling through it). Quantization, which discards information from the block and results in compression, thus prioritizes discarding higher frequencies. The resulting quantized table is then serialized into a flat form. Since data is rendered on a screen from left to right, top to bottom, it makes sense to encode the data in this manner, which results in a sequential format (here, “sequential” refers to the in-memory serialization order and should not be confused with sequential on-disk access). Decoding the resulting data is simply a matter of inverting (albeit losslessly) the process that we just described. Progressive Image Compression. Progressive formats allow data to be read at varying degrees of compression without duplication. In the sequential case, data is ordered by blocks, and thus, partially reading the data results in “holes” in the image for unread blocks (Wallace, 1992). Progressive formats, in contrast, ensure that all blocks get some information (deltas) before any block is revised (with more deltas). As progressive formats are simply a different traversal of the quantization matrix, with all else being equal, they contain the same information as sequential JPEG (JPEGTran LibJPEG). Progressive JPEG, combined with an additional rearrangement of data, forms the basis of the idea behind PCRs. In Figure 2, non-progressive formats serialize the image matrix in one pass, while progressive formats serialize the matrix in disjoint groups of deltas which are called scans. Scans are ordered by importance (e.g., the first few scans improve fidelity more than subsequent scans). Thus, any references to images generated from scan n will implicitly assume that the image decoder had access to all prior scans (i.e., {scan 1, scan 2, . . . , scan (n− 1)}).
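To make scans concrete, the following sketch (not the implementation used here; file names are placeholders) losslessly converts a baseline JPEG to a progressive one with jpegtran and locates scan boundaries by searching for Start-of-Scan (SOS, 0xFF 0xDA) markers in the byte stream:

```python
import subprocess

def to_progressive(src_path: str, dst_path: str) -> None:
    # jpegtran losslessly rewrites a baseline JPEG as a progressive JPEG
    # (with default settings this typically yields about 10 scans).
    with open(dst_path, "wb") as dst:
        subprocess.run(["jpegtran", "-progressive", src_path], stdout=dst, check=True)

def sos_offsets(jpeg_bytes: bytes) -> list:
    # Each scan begins with a Start-of-Scan marker (0xFF 0xDA); entropy-coded
    # data escapes 0xFF bytes, so this search finds scan boundaries.
    offsets, i = [], 0
    while (i := jpeg_bytes.find(b"\xff\xda", i)) != -1:
        offsets.append(i)
        i += 2
    return offsets

# Hypothetical usage:
# to_progressive("image.jpg", "image_prog.jpg")
# print(len(sos_offsets(open("image_prog.jpg", "rb").read())), "scans")
```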
The bottom of Figure 2 shows how image fidelity improves from a single scan to utilizing all scans. 3 PROGRESSIVE COMPRESSED RECORDS In this section, we introduce a novel record format for machine learning training called Progressive Compressed Records (PCRs). PCRs are a combination of both layout and data representation. Efficient layouts guarantee that hardware is fully utilized (in terms of bandwidth), while efficient data representations can reduce the total amount of work that is required of the system. To this end, we introduce the concept of scan groups in Section 3.1, which leverage both layout and progressive compression to obtain dynamic compression, allowing high-performance reads while reducing the amount of data read. Using progressive compression, scan groups break images into deltas, which are then rearranged in order to facilitate reduced, yet sequential, data access. In Section 3.2, we discuss how PCRs are implemented, covering both creating PCRs (encoding) and reading them (decoding). Because the PCR implementation boils down to a bit shuffle, it has the benefits that: 1) PCRs are easy to implement, 2) they are fundamentally lossless, and 3) processing them is fast. As we demonstrate in Section 4, while PCRs can be implemented easily, they yield large speedups for a variety of scenarios. Further, PCRs can be generalized beyond images and JPEG. 3.1 SCAN GROUPS Scan groups are a collection of scans (deltas) of the same fidelity. Scan groups combine layout with progressive compression to allow reading subsets of the compressed data with high hardware efficiency. PCRs make the assumption that the entire training data will be read at the same fidelity. Using this assumption, scan groups rearrange the data such that all deltas of the same fidelity are grouped together. This, in turn, enables groups of deltas to be read together sequentially, which creates dynamicity in the decoding process. Since scans are sorted by importance, and scan groups are a set of scans, the scan groups are also sorted by importance. To paint a clear representation of how scan groups work, we point the reader to Figure 3. PCRs begin with some metadata which is assumed to be needed by all machine learning tasks, such as labels or bounding boxes. In practice, metadata is small in size, and, thus, the space overheads are negligible. The metadata is followed by scan groups, which consist of scans. The scan 1 representation of the shark in Figure 2 will be available in its record once data is read up to offset 1. Likewise, the scan 3 representation will be available once the record is read up to offset 3, and the representation will be more crisp as 3 scans were used per image, rather than 1. Reading up to the end of the record yields the most complete representation of the image. As scan groups consist of groups of the same fidelity, every image contained in a record is available at the same fidelity at the same group offset. Users of PCRs can read data at a certain scan fidelity by simply reading the on-disk byte stream from the start of the PCR (i.e., offset 0) to the byte offset of the desired scan group. Partially reading the records results in bandwidth savings without re-encoding the data. 3.2 IMPLEMENTATION There are two fundamental PCR implementation details: the encoding process and the decoding process.
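Before detailing those two processes, the grouping idea of Section 3.1 can be made concrete with a short sketch (a simplified in-memory toy of the layout, not the actual serialization described in Section 3.2; all names are illustrative):

```python
from typing import List, Tuple

def build_record(metadata: bytes, images: List[List[bytes]]) -> Tuple[bytes, List[int]]:
    # images[i][g] is the g-th scan (delta) of image i; all images are assumed
    # to have the same number of scans. Scan group g holds the g-th delta of
    # every image, so reading the record prefix up to a group's end offset
    # yields every image in the record at the same fidelity.
    num_groups = len(images[0])
    record = bytearray(metadata)
    group_end = []
    for g in range(num_groups):
        for scans in images:
            record += scans[g]
        group_end.append(len(record))
    return bytes(record), group_end

def read_at_fidelity(record: bytes, group_end: List[int], g: int) -> bytes:
    # A single sequential read of the first group_end[g] bytes is enough.
    return record[:group_end[g]]
```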
The encoding process transforms a set of JPEG files into a directory, which contains 1) a database for PCR metadata and 2) at least one .pcr file. The decoding process, which takes the directory as input and yields a set of JPEG images, efficiently inverts a subset of the encoding. The dataset is split into many PCRs, and, thus, the training process is reading tens to hundreds of .pcr files per epoch. The data loader is where the PCR decoding library interfaces with the inputs provided to deep learning libraries (e.g., TensorFlow (Abadi et al., 2015), MXNet (Chen et al., 2015), PyTorch (Paszke et al., 2017)). Below, we describe how each of these steps is done. Encoding. Given a set of images, the PCR encoder must break the images into scans, group the scans into scan groups, and sort the scan groups by fidelity. Once the groups are sorted, the PCR encoder can serialize the groups while taking note of their offsets (so that subsets may later be decoded). The metadata (e.g., labels) is prepended to the serialized representation, and the serialized representation is written to disk. We focus on grouping JPEG due to its generality, but PCRs can use any dataset-level progressive format. Images can be decomposed in both space and fidelity; other data modalities (e.g., video) may also have time. Our implementation uses JPEGTRAN (JPEGTran Man Page) to losslessly transform the set of JPEG images into a set of progressive JPEG images. With the default settings, each JPEG is broken up into 10 scans. The encoder scans the binary representation of the progressive JPEG files, searching for the markers that designate the end of a scan group. The encoder thus has access to all 10 offsets within the JPEG files that can be used to determine the boundaries between scan regions. Forming scan groups requires grouping the scan regions with the same fidelity together, which can be done in one pass over the set of images corresponding to that PCR. This grouping must be reversible, as the decoding process will un-group the scans to reconstruct the original images. This grouping can be done with existing serialization libraries. We use Protobuf (Protobuf) to serialize the groups as well as the labels. However, it is key that every group (and the metadata) be serialized as a separate message, as Protobuf can rearrange the contents within a message, and thus can rearrange the ordering of the groups themselves. We finally concatenate the contents of the messages and write them out as one file. As shown in Appendix A.5, any record format conversion can be expensive; PCRs benefit from requiring only a single conversion for multiple tasks. Decoding. To decode a PCR file, one has to first lookup the file’s scan group offsets in the database. The offsets provide sufficient information to do a partial read of the file (e.g., instead of reading the entire file, we read only enough bytes to read up to the desired scan group). Decoding the JPEGs requires inverting the PCR scan-group grouping process for the available scan-groups prior to JPEG decode. Since we are missing scan-groups, we terminate the byte stream with an End-of-Image (EOI) JPEG token—this technique allows most JPEG decoders to render the byte stream with only the available subset of scans. The bulk of the inverse conversion is done in 150 lines of C++ code. Loader. We implemented PCR loaders using PyTorch’s dataloader as well as DALI (NVIDIA, 2018)’s ExternalSource operator to return batches of images at a configurable fidelity (with the corresponding labels). 
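For illustration, the partial-decode step just described can be sketched as follows (a simplified sketch assuming Pillow is available and that the per-scan byte offsets were recorded at encoding time; the production decoder differs):

```python
import io
from PIL import Image, ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True  # be lenient if the decoder flags missing scans
EOI = b"\xff\xd9"  # JPEG End-of-Image marker

def decode_first_scans(progressive_jpeg: bytes, offsets: list, k: int) -> Image.Image:
    # Keep only the bytes of the first k scans and terminate the stream with
    # EOI so that most standard JPEG decoders render the partial image.
    data = progressive_jpeg[:offsets[k]] + EOI if k < len(offsets) else progressive_jpeg
    return Image.open(io.BytesIO(data)).convert("RGB")

# Hypothetical usage: a low-fidelity preview from the first scan only.
# img = decode_first_scans(open("image_prog.jpg", "rb").read(), offsets, 1)
```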
We find that a pipeline abstraction simplifies loader design, since recordbased datasets can be easily iterated sequentially. In contrast, the PyTorch Dataloader abstrac- tion, which assumes that we can index randomly into an in-memory data structure (e.g., i = RandInt(0, n); (x, y) = data[i];), is harder to use for constantly fetching record formats off disk. Our implementation, while being only several hundred lines of code, obtains image rates that are competitive (e.g., faster/slower depending on number of scans) with the included DALI TFRecord loader, showing that PCRs can be implemented efficiently (i.e., fast enough to rarely bottleneck data loading) with a low amount of engineering effort. 4 EXPERIMENTS This section presents our evaluation of PCRs using a suite of large-scale image datasets. As large images are more taxing to a system’s network and storage, our evaluation focuses on datasets with high-resolution images. We describe our experimental setup in Section 4.1. We present our evaluation results in Section 4.2, showing that halving data bandwidth per image results in comparable accuracy but with half the training time. In Section 4.3, we analyze the intuitive relationship between objective measures of image fidelity and time-to-accuracy. Finally, in Section 4.4, we present results that trace the training time speedups to the data loading times themselves. 4.1 EVALUATION SETUP Our evaluation uses the ImageNet ILSVRC (Deng et al., 2009; Russakovsky et al., 2015), HAM10000 (Tschandl et al., 2018), Stanford Cars (Krause et al., 2013), and CelebA-HQ (Karras et al., 2018) datasets, which are described below. See Appendix A.4 for additional details. Datasets. • ImageNet-100 ImageNet provides a wide diversity of classes, of which we focus on the first 100 to make training times more tractable. Since classes are roughly ordered by ImageNet categories, this results in a fine-grained, i.e., hard to classify, multiclass task. We convert the dataset into PCRs in batches of 1024, which results in 126 PCRs. We use the full ImageNet dataset in Appendix A.7. • HAM10000 We split the HAM10000 dataset randomly 80%/20% between train and test. We convert the dataset into PCRs in batches of 64, which results in 125 PCRs of similar size as the ones used for ImageNet-100. • Stanford Cars The Stanford Cars dataset is another fine-grained classification dataset, since all images are cars, and there are 196 classes spread over 16k images. We believe this dataset highlights some of the worst-case training scenarios, as it is considerably easier to predict highly compressed variants of unrelated images by exploiting low frequency image statistics (e.g., planes vs. frogs). We explore a coarse-grained version of Cars in Appendix A.6. Cars has 63 PCRs. • CelebAHQ-Smile CelebA-HQ is a high-resolution derivative of the CelebA dataset (Liu et al., 2015), which consists of 30k celebrity faces at 10242. We use the annotations provided by CelebA to construct a smiling or not smiling dataset. We split the 30k dataset into 80%/20% train/test, and we convert the training set into 93 PCRs. All datasets utilize resizing, crop, and horizontal-flip augmentations, as is standard for ImageNet training. We provide examples of scan groups for these datasets in Appendix A.8. Training Regime. We use pretrained ImageNet weights for HAM10000 and Cars due to the limited amount of training data. 
We use standard ImageNet training, starting the learning rate at 0.1 (with gradual warmup (Goyal et al., 2017)) and dropping it on epoch 30 and 60 by 10×. After augmentations, all inputs are of size 224× 224. The pretrained experiments (HAM10000 and Cars) start at a learning rate of 0.01 to avoid changing the initialization too aggressively. We use fp16 training (Micikevicius et al., 2018) as it results in an additional 10% images per second (see Appendix A.3). We use a ResNet18 (He et al., 2016) and ShuffleNetv2 (Ma et al., 2018) architecture for our experiments with a batch size of 128 per each worker. We run each experiment at least 3 times to obtain confidence intervals given different random seeds and sources of non-determinism such as multi-threading and I/O. System Setup. We run distributed experiments on a 16-node Ceph (Weil et al., 2006) cluster connected with a Cisco Nexus 3264-Q 64-port QSFP+ 40GbE switch. Each node has a 16– core Intel E5–2698Bv3 Xeon 2GHz CPU, 64GiB RAM, NVIDIA TitanX, 4TB 7200RPM Seagate ST4000NM0023 HDD, and a Mellanox MCX314A-BCCT 40GbE NIC. All nodes run Linux kernel 4.15 on Ubuntu 18.04, CUDA10, and the Luminous release (v12.2.12) of Ceph. We use six of the nodes as Ceph nodes; five nodes are dedicated as storage nodes in the form of Object Storage Devices (OSDs), and one node is used as a Ceph metadata server (MDS). The remaining 10 nodes are used as machine learning workers for the training process. This means there is a 2:1 ratio between compute and storage nodes. We use PyTorch (Paszke et al., 2017) (v1.12) with NVIDIA Apex (Apex) (v0.1) and NVIDIA DALI (NVIDIA, 2018) (v0.14.0). We use at least four worker threads to prefetch data in the loader. While we focus on this particular distributed setting, we observe similar time-to-accuracy gains on a single machine with eight GPUs sharing the same disk, and we believe the results will generalize to different setups. 4.2 TIME TO ACCURACY The time-to-accuracy results for ResNet18 training are presented in Figure 4, while those of ShuffleNetv2 are presented in Figure 6. See Appendix A.2 for a tabular view and Appendix A.1 for the corresponding training loss results. All scan groups within a dataset were run for the same amount of epochs, so lower scan groups finish earlier. 90 epochs are shown for ImageNet, 150 epochs are shown for HAM10000, 250 epochs are shown for Stanford Cars, and 90 epochs are shown for CelebAHQ-Smile. We sample the test accuracy every 15 epochs for non-ImageNet datasets to reduce interference with training measurements. To avoid measuring implementation differences with other loaders, our evaluation focuses on the differences obtained by reading various amounts of scan groups. Reading all the data (up to scan group 10) is the baseline. First, we note that for all datasets, except for Cars, PCRs provide a 2× boost to time-to-accuracy compared to the baseline. The reason for this speedup is that lower scan groups are smaller. As shown in Figure 5, scan group 5 is roughly half the size of the baseline, and scan group 1 is a fifth of scan group 5 (i.e., a potential 10× bandwidth savings). This trend holds across datasets (see Appendix A.1). As we will discuss in Section 4.4, the space savings manifest in reduced dataloader latencies. Second, we note that there is an inherent trade-off between convergence quality and the speedup attained by using less storage resources. 
In general, although lower fidelity scan groups allow the system to operate more efficiently, they do so at the expense of model convergence. Scan group 1, the lowest fidelity scan, performs poorly, especially on Cars, where fine-grained details are important. Scan groups limit the maximum achievable accuracy on a task; if learning plateaus prematurely, applications should raise the scan group in a manner similar to dropping the learning rate. Third, the relative rankings of scan groups are relatively stable across models and datasets, which reduces tuning efforts in choosing the appropriate scan group. We further relate these rankings to the fidelity of the scan groups in Section 4.3. Our conclusion is that, for most datasets, scan group 5 costs half as much in terms of bandwidth, but reaches the same level of test accuracy as the baseline—thus, it is a good default. This is most apparent for ImageNet and HAM10000, which are challenging enough for small variations in image fidelity to make a commensurate difference in test accuracy. In contrast, Cars is too fine-grained to allow images to be degraded, and CelebAHQ-Smile is too coarse-grained for image degradation to matter. 4.3 THE RELATIONSHIP BETWEEN IMAGE FIDELITY AND TEST ACCURACY We use MSSIM (Wang et al., 2003), a standard measure of image similarity, to compare how various scans approximate the reference image, and we show the results in Figure 7. We find that there is a strong connection between MSSIM and the resulting final test accuracy, especially when comparing scan groups within a task. Our preliminary tests demonstrate that scan groups that have very similar MSSIM perform very similarly, which is why only groups 1, 2, 5, and the baseline are shown. Due to the way progressive JPEG is coded by default, groups tend to cluster (e.g., 2, 3, and 4 are usually similar, while 5 introduces a difference). We note that MSSIM being poor (e.g., scan group 1 for cars) or MSSIM being close to baseline (e.g., scan group 5 for HAM10000) are good predictors of relative test accuracy within tasks. MSSIM can therefore be used as a diagnostic for choosing scans. 4.4 THE RELATIONSHIP BETWEEN SCANS AND DATA STALLS The datasets we evaluated show that data loading can slow down the training process. To highlight these slowdowns, and the improvements PCRs achieve by not using all scan groups, we present the loading time of data for the ResNet18 ImageNet-100 run in Figure 8. We obtain similar results for the other datasets. The baseline of using all scan group results in high periodic loading stalls, where the prefetching queue was drained. Upon blocking, training cannot proceed until the worker threads obtain a full batch of data. Periods of (mostly) no stalls are caused by both threads pre-fetching the data and single records servicing multiple minibatches. Using fewer scan groups reduces the amount of data read, which results in lower magnitude stalls. We observe these stalls with both DALI and PyTorch loaders. 5 RELATED WORK Training Over Large Datasets. Training with massive amounts of parallelism (thus stressing system bandwidth) while achieving near-linear speedup has been the focus of previous work, and it highlights a practical need for efficient data pipelines at scale. A common objective is training mod- els over ImageNet in a record amount of time (Goyal et al., 2017; You et al., 2018; Jia et al., 2018; Ying et al., 2018; Yamazaki et al., 2019). 
This line of work, while demonstrating immense bandwidth needs, typically keeps data in memory, avoiding storage problems altogether. Recently, the high performance computing community has become interested in training models at massive scale (27k GPUs) (Kurth et al., 2018). Since each GPU matches a disk in bandwidth, the dataset was partitioned among the local memory/storage of the nodes, avoiding the distributed filesystem. Our work attempts to reduce the storage bottleneck altogether, such that anything from a couple disks to a distributed file system could service many GPUs. A separate line of work shows that I/O is a significant bottleneck for certain tasks and proposes optimizing I/O via a set of deep-learning specific optimization to LMDB (Pumma et al., 2019). In contrast, our focus is more on data representation, which is independent of the internals of the storage system. Production systems such as TFX (Baylor et al., 2017) have used custom Protobuf parsers to get 2–5× speedups for simple (e.g., linear) models; these techniques are complementary to ours and reduce loader computational overheads. Dataset Reduction Techniques. The availability of larger datasets has spawned interest in learning algorithms that guaranteed both “good” model accuracy and lower computational complexity. Data reduction techniques, such as sketching, coresets, clustering, and sampling, have been used to reduce the size of a training set (Karnin & Liberty, 2019; Feldman et al., 2013; Liberty, 2013; Woodruff, 2014; Daniely et al., 2017; Kabkab et al., 2016; Bachem et al., 2017). A different approach is to use the unaltered training set, but reduce the size of the active training set to reduce bandwidth requirements (Matsushima et al., 2012). In contrast, we modify the data representation and layout to be more efficient across a wide variety of models. Compression. Finally, the reduction of data size via compression methods is ubiquitous across computer systems. To avoid costly model transmission/storage, prior work compressed neural network models (Han et al., 2016b;a; 2015; Cheng et al., 2017; Xu et al., 2018; Hwang & Sung, 2014; Anwar et al., 2015; Denton et al., 2014). Similarly, dataset distillation (Wang et al., 2018) compresses a model’s parameters into a few training examples. Our work attempts to compress data for training, and not the network itself. Prior work has looked into optimizing training systems by compressing neural network training network traffic (Lim et al., 2019; Alistarh et al., 2017; Lin et al., 2018; Wen et al., 2017; Wangni et al., 2018; Zhang et al., 2017). This trend is not specific to machine learning; prior work in databases, computer memories, and the web used compression to reduce system bandwidth requirements (Zukowski et al., 2006; Abadi et al., 2006; Pekhimenko et al., 2018; 2012; Yan et al., 2017; Agababov et al., 2015). Our work focuses on bandwidth for ML data pipelines by utilizing the compression robustness found in most models. Other work modifies models to be able to directly train on compressed representations for the purpose of avoiding decoding or reducing model complexity (Gueguen et al., 2018; Torfason et al., 2018; Fu & Guimaraes, 2016; Ulicny & Dahyot, 2017). Our work differs in motivation, as we do not focus on model computation or make modifications to the models. 
Previous work has investigated how image degradation (e.g., JPEG artifacts) affects inference (Dodge & Karam, 2016; Vasiljevic et al., 2016; Peng et al., 2016; Zheng et al., 2016); in contrast, our work focuses on the effects of compression on training.
6 CONCLUSION To continue making advances in machine learning, researchers will need access to larger and larger datasets, which will eventually spill into (potentially distributed) storage systems. Storage and networking bandwidth, which are precious resources, can be better utilized with efficient compression formats. We introduce a novel record format, Progressive Compressed Records (PCRs), that trades off data fidelity with storage and network demands, allowing the same model to be trained with 2× less storage bandwidth while retaining model accuracy. PCRs use progressive compression to split training examples into multiple examples of increasingly higher fidelity without the overheads of naive approaches. PCRs avoid duplicating space, are easy to implement, and can be applied to a broad range of tasks dynamically. While we apply our format in this work specifically to images with JPEG compression, PCRs are general enough to handle various data modalities or additional compression techniques; future work will include exploring these directions in fields outside of visual classification, such as audio generation or video segmentation.
A APPENDIX A.1 LOSS, SPACE SAVINGS, AND ACCURACY PER EPOCH Below, we provide additional experiment plots that were omitted in the main text. Figure 9 and Figure 10 contain the loss over time for the ResNet-18 and ShuffleNetv2 experiments shown in Section 4. Figure 11 extends Figure 5 to show the scan sizes for all datasets. It is worth noting that Top-5 accuracies mirror the Top-1 accuracy trends for ImageNet and Cars. To measure the effect of compression without accounting for time, we show accuracy vs. epoch plots in Figure 12 and Figure 13. While compression can itself be viewed as a data augmentation (e.g., removing high frequency features that can possibly cause overfitting), we notice that it does not usually improve accuracy. Rather, most of the gains in time-to-accuracy are from faster image rates.
Figure 9 (plots omitted): Training loss with ResNet-18 on (a) ImageNet-100, (b) HAM10000, (c) Stanford Cars, and (d) CelebAHQ-Smile. Time is the x-axis (seconds) and is relative to the first epoch. 95% confidence intervals are shown.
Figure 10 (plots omitted): Training loss with ShuffleNetv2 on the same four datasets. Time is the x-axis (seconds) and is relative to the first epoch. 95% confidence intervals are shown.
Figure 11 (plots omitted): The size in bytes of the scans read for (a) ImageNet-100, (b) HAM10000, (c) Stanford Cars, and (d) CelebAHQ-Smile. Scan group 0 is shown, which contains only labels and is typically ∼100 bytes. Each scan adds roughly a constant amount of data (i.e., linear scaling), although certain scans add considerably more than others (i.e., sizes sometimes cluster) due to techniques like chroma subsampling. Using all 10 scans can require over an order of magnitude more bandwidth than 1–2 scans.
A.2 TIME TO CONVERGENCE TABLE We provide a table of time-to-accuracy in Table 1 to help with reading Figure 4 and Figure 6. For Stanford Cars, low numbers of scans do reach accuracies faster than the baseline, but there is a noticeable drop in accuracy. This issue of achieving comparable accuracy for the Cars dataset is further explored in Appendix A.6.
A.3 EXPERIMENT SETUP Below we describe details of how the experiments were run, such as hardware characteristics and software configurations. Benchmark Cluster Speeds. As noted in the main text, we utilize an NVIDIA TitanX Graphics Processing Unit (GPU) on each node for the model training. This GPU allows us to train (with FP32/FP16) ResNet-18 at 405/445 images per second and ShuffleNetv2 at 760/750 images per second. With a cached, decoded dataset of 224 × 224 resolution images, we achieve a cluster-wide 3625/4050 images per second for ResNet-18 and 6975/7075 images per second for ShuffleNetv2. ImageNet images are around 110kB on average; with 10 GPUs, the cluster can consume 445 megabytes/s (ResNet-18) and 775 megabytes/s (ShuffleNetv2) of storage system bandwidth. GPUs continue to get faster over time, and faster GPUs (or other accelerators) have higher I/O bandwidth demands. Decoding Overhead. Progressive compression has some computational overhead associated with decompression compared to baseline formats. This overhead can grow with the number of scans, and, thus, users of PCRs may be concerned about the trade-offs between decoding overheads and bandwidth savings. First, we note that PCRs can use a large number of scans (e.g., hundreds), but, in practice, useful behavior is observed using only 10 scans (of which we only use 4). Second, the decoding overhead is often a favorable trade-off compared to a storage bottleneck, if one exists. To test this, we ran a Python microbenchmark that stores a subset of ImageNet data in memory and uses the PIL and OpenCV libraries for decoding. For PIL, we process 230 baseline images per second and 150 progressive images per second. For OpenCV, we process 225 baseline images per second and 165 progressive images per second. Thus, progressive compression with 10 scans adds only around 40–50% computational expense over baseline formats for common implementations. This speed, combined with additional optimizations such as multi-core parallelism (e.g., we would expect 4× these rates with 4 cores), suggests that while decoding can be an issue, the penalty from using progressive images can be managed more easily than a storage bottleneck (i.e., compute can usually be traded for storage bandwidth).
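A sketch of such a decode-throughput microbenchmark (a simplified reconstruction, not the exact script used; the blob lists are placeholders for in-memory JPEG byte strings):

```python
import io
import time
from PIL import Image

def images_per_second(jpeg_blobs, repeats: int = 3) -> float:
    # Decode in-memory JPEG byte strings with Pillow and report throughput.
    start = time.perf_counter()
    count = 0
    for _ in range(repeats):
        for blob in jpeg_blobs:
            Image.open(io.BytesIO(blob)).convert("RGB")
            count += 1
    return count / (time.perf_counter() - start)

# Hypothetical usage with two lists of encoded images held in memory:
# print("baseline:", images_per_second(baseline_blobs))
# print("progressive:", images_per_second(progressive_blobs))
```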
Further, some of the decoding can actually be moved to an accelerator, like the GPU used for training, something which is already available via nvJPEG (https://developer.nvidia.com/nvjpeg). Reducing this computational expense by optimizing the implementation or reducing the number of scans (since our experiments only use 4 distinct scans) is left as future work. Image Loading Rates. We provide image loading rates observed during training in Table 2. Using more scans slows down training significantly, as can be seen in the image rates. It is worth noting that these rates vary considerably during runtime (due to stalls), and ShuffleNetv2 is capable of a higher maximum training rate than ResNet-18. Further, as the number of scans is reduced, image rates approach the maximum achievable by the cluster for each model. A.4 DATASET DETAILS Below we describe the characteristics of the used datasets. ImageNet-100 Creation. The ImageNet-100 dataset was constructed by subsampling 100 classes out of the 1000 classes found in the ImageNet ILSVRC dataset (Deng et al., 2009; Russakovsky et al., 2015). These classes were chosen arbitrarily to limit computation time—they are the first 100 classes of ImageNet in sorted directory listing form, i.e., n01440764–n01855672. CelebAHQ-Smile Creation. The CelebAHQ dataset (Karras et al., 2018) was created as a high quality version of the CelebA dataset (Liu et al., 2015). CelebA contains attributes for each face, such as whether the face is smiling or not. CelebAHQ-Smile utilizes these attributes to construct a dataset of 30k faces, where each face is assigned a binary variable for smiling or not. While the CelebA dataset was subsampled to construct CelebAHQ, we do not subsample CelebAHQ further (i.e., we use all 30k images it contains). Record and Image Quality Details. We provide the dataset size details for the encoded datasets in Table 3. As the original (e.g., lossless) images are hard to find, we estimate the JPEG quality setting of the training set with ImageMagick using identify -format ’%Q’. The JPEG quality setting determines the level of frequency quantization outlined in Figure 2. Intuitively, one would expect that higher quality JPEG images could allow more aggressive PCR compression rates for a fixed resolution, since each image has more redundant information on average. ImageNet and HAM10000 both have high quality images. CelebAHQ has lower quality images, but they are downscaled to 256×256 for training purposes, which increases the information density in the image (e.g., blurry images can be made to appear less blurry by downsampling), a fact exploited in prior work (Yan et al., 2017). Cars has neither high JPEG quality nor large resolution. Under-compressing images (perhaps at high resolution) during the initial JPEG compression may allow for a larger range of viable scan groups. A.5 RECORD FORMAT CONVERSION TIMES We provide bandwidth-optimized record baselines in Figure 14, where we re-encode the images using a statically-chosen level of compression. These baselines re-encode the images with 50% and 90% JPEG quality, respectively, to reduce dataset size at a fixed level of fidelity. It is worth noting that re-encoding images compounds with the original JPEG compression, so the re-encoded image quality may be lower than 50% or 90% quality compared to the images in their original lossless form. This is in contrast to PCRs, which losslessly convert the images into a progressive format, which allows dynamic access to the level of fidelity.
We observe that both the baseline method of dataset bandwidth reduction and the PCR method can take considerable encoding time, since the encoding time scales proportionally to the dataset size. We also observe that the PCR method is competitive (1.15× to 2.98×) to that of the baseline in terms of encoding time. PCRs avoid having to re-encode a dataset at multiple fidelity levels, and, therefore, they can save both storage space and encoding time. Converting the full ImageNet into record format takes roughly 16× longer than the 6 minutes needed for the 10× smaller subsampled dataset—the PCR conversion is 96 minutes (53 minutes are spent in JPEG conversion). One reason for this additional slowdown is that any system caches (e.g., in the distributed filesystem or the file cache on the converter node) are less likely to see a cache hit due to the working set size being larger. Although the exact conversion times are dependent on implementation, hardware, and the dataset, conversions times can be in the range of one hour of compute time per 100 GB. A.6 COARSE GRAINED VS. FINE GRAINED CARS EXPERIMENTS We provide experiments validating that compression needs vary within the same dataset for different tasks in Figure 15 and Figure 16, which show accuracy and loss, respectively. This experiment simply coarsens the granularity of the classification task, and demonstrates that lower scan groups can be used for tasks which are easier. The full range of classes is used for Baseline (i.e, car make, model, and year create a unique class), only car make is used for Make-Only, and a binary classification task of Corvette detection is used for Is-Corvette. We can see that compared to the original task, the coarse tasks reduce the gap between scan groups, and the binary task closes the gap even more. This suggests that as the tasks get easier, the tolerance for lower scan groups grows. Simply re-assigning the class labels to a coarser class reduces the complexity of the task and closes the accuracy gap across scan groups. A fixed PCR record encoding (i.e., without re-encoding) can support multiple tasks at the optimal quality, whereas static approaches may need one encoding per task. Some training methodologies, such as Progressive GAN training (Karras et al., 2018), utilize different dataset qualities over the course of training (e.g., training with a course-to-fine quality schedule), and, thus, a single training session may consume dozens of distinct dataset qualities. A.7 IMAGENET-1000 RESULTS We provide the full ImageNet (i.e., 1000 classes) results with ResNet-18 and ShuffeNetv2 in Figure 17. Only group 5 and the baseline are shown, since lower group numbers have difficulty achieving baseline accuracy parity. The results show that PCRs can speed up training by a factor of 2 while retaining accuracy even with large scale (i.e., over 1 million samples) training. 
Figure 16 (plots omitted): Training loss with ResNet-18 (top) and ShuffleNetv2 (bottom) on a coarser version of the Stanford Cars dataset, with panels for Baseline, Make-Only, and Is-Corvette. The full range of classes is used for Baseline (i.e., car make, model, and year create a unique class), only car make is used for Make-Only, and a binary classification task of Corvette detection is used for Is-Corvette. The gap between scan groups closes as the task is made more simple. Time is the x-axis (seconds) and is relative to the first epoch. 95% confidence intervals are shown.
A.8 IMAGE EXAMPLES BY SCAN We provide image examples from each dataset that illustrate each scan group in Figure 18. Reading more scans, and, thus, data, from a progressive image results in higher fidelity images, but there are diminishing returns. Images can use a remarkably low amount of scan groups without impacting visual quality, which manifests in bandwidth savings if used accordingly.
Figure 17 (plots omitted): Training loss and test accuracy with ResNet-18 (top) and ShuffleNetv2 (bottom) on the 1000-class ImageNet dataset, comparing scan group 5 with the baseline. Time is the x-axis (seconds) and is relative to the first epoch. 95% confidence intervals are shown.
1. What is the focus and contribution of the paper on image dataset storage?
2. What are the strengths of the proposed approach, particularly its simplicity and potential effectiveness?
3. What are the limitations of the paper's experiments regarding their representativeness for real machine learning tasks?
4. How does the reviewer assess the clarity and reliability of the paper's content?
5. Are there any alternative baselines that the paper could have discussed, such as subsampling pixels and storing incremental subsets of those pixels?
6. How does the reviewer evaluate the significance of the paper's findings regarding faster convergence, and what might be the factors contributing to it?
7. Would it be beneficial for the paper to address the applicability of the proposed method in parallel training settings, such as using SSD storage?
Review
Review Summary: This paper introduces a new storage format for image datasets used in machine learning training. The core idea is to use progressive JPEG to create sequential scans of the input image, from lower to higher fidelity. The authors found that on some datasets, using half of the scans is already enough to reach similar accuracy while speeding up convergence by a factor of 2.
Detailed feedback:
- The paper presents a simple idea that directly exploits the nature of JPEG compression. The paper shows that it can work well and could potentially be integrated into real machine learning dataset storage applications.
- The related work section is thorough.
- The experiments are limited to image classification, and some of the datasets are subsampled (e.g., ImageNet and CelebA). This may not represent real machine learning tasks well, and practitioners may be unsure about the reliability of the compression. The “Cars” dataset involves fine-grained classification, on which the proposed method is less effective.
- It is not clear from Figure 1 what the key advantage of the proposed method is, or what the different mechanisms are.
- Alternatively, one could subsample the pixels and store incremental subsets of those pixels. It would be good if the paper discussed this baseline.
- The data storage format is only loosely related to the main goal of the paper, which is to show that networks can still train very well, and even faster, when receiving partial input data. Once the number of scans needed for an application is known, one does not necessarily need to keep a full lossless version and can simply use a lossy version. In other words, the experiment section could be replicated with any other lossy compression scheme by varying the compression ratio.
- In my opinion, there could be two reasons for faster convergence: 1) the lowered image quality makes the data easier to learn, and 2) the smaller data size allows faster reading of data from disk. The paper only shows wall-clock speed-up, but it is unclear which factor is bigger. Factor 2) could potentially be addressed by faster disk reading, such as SSDs or in-memory datasets. One of the motivations is to help parallel training, and it is also mentioned that non-random sampling of data can hurt training performance. It would be good to showcase how the proposed method helps in those parallel training settings.
Conclusion: This paper presents a simple and effective idea that could be beneficial in practice. However, my main concern is whether the experiments are representative enough of large scale settings (e.g., the non-subsampled ImageNet dataset with parallel training using SSD storage). Therefore, my overall rating is weak accept.
ICLR
Title A Loss Curvature Perspective on Training Instabilities of Deep Learning Models Abstract In this work, we study the evolution of the loss Hessian across many classification tasks in order to understand the effect the curvature of the loss has on the training dynamics. Whereas prior work has focused on how different learning rates affect the loss Hessian observed during training, we also analyze the effects of model initialization, architectural choices, and common training heuristics such as gradient clipping and learning rate warmup. Our results demonstrate that successful model and hyperparameter choices allow the early optimization trajectory to either avoid—or navigate out of—regions of high curvature and into flatter regions that tolerate a higher learning rate. Our results suggest a unifying perspective on how disparate mitigation strategies for training instability ultimately address the same underlying failure mode of neural network optimization, namely poor conditioning. Inspired by the conditioning perspective, we show that learning rate warmup can improve training stability just as much as batch normalization, layer normalization, MetaInit, GradInit, and Fixup initialization.
1 INTRODUCTION Optimization of neural networks can easily fail. While recent architectural advances such as skip connections (He et al., 2016a) and Batch Normalization (Ioffe and Szegedy, 2015) have been applied successfully to produce architectures and hyperparameters that reliably train well, even small changes to a trainable configuration can easily result in training that diverges. More generally, producing a configuration that strikes the right balance between stable training and rapid optimization progress on a new domain can be difficult—practitioners and researchers have few reliable heuristics to guide them through the process. As a result, the specific hyperparameter tuning protocol has an outsized influence on the results (Choi et al., 2019; Sivaprasad et al., 2020) and successes often rely on large hyperparameter searches (Nado et al., 2021). Developing a principled understanding of what makes general architectures trainable would allow researchers to more reliably navigate this process and has the potential to dramatically accelerate research into finding better, more scalable architectures.
The focus of the empirical investigation of this work is to better understand what limits the maximum trainable learning rate for deep learning models trained with the typical minibatch stochastic gradient descent (SGD) family of algorithms. As part of this investigation, we examine several methods developed by the deep learning community that have enabled training at larger learning rates and improved performance. Many methods have been developed that can achieve this goal, notably normalization, learning rate warmup, gradient clipping (Pascanu et al., 2013), and better model initializations such as Fixup (Zhang et al., 2019b), MetaInit (Dauphin and Schoenholz, 2019), and GradInit (Zhu et al., 2021). While these methods are certainly not exactly equivalent, a key property they all have in common is that they can enable training at larger learning rates when applied to certain models (see for example Figure 1). A natural hypothesis is that methods which enable training at larger learning rates do so by reducing the sharpness of the loss surface during training (throughout this work, we use the term sharpness to refer to the maximum eigenvalue of the loss Hessian, denoted λ1; see Appendix B for more details). Indeed, this hypothesis has already been proposed as one of the beneficial effects of Batch Normalization (Ghorbani et al., 2019; Santurkar et al., 2018) and residual connections (Li et al., 2017), and quadratic models of the loss surface predict that optimization with SGD is unstable when λ1 > 2/η (Wu et al., 2018). However, recent empirical investigations into the relevance of quadratic stability bounds to neural network training have either focused on smaller models or on full batch training at small learning rates, and do not investigate connections between sharpness, model initialization, and learning rate warmup (Cohen et al., 2021; Jastrzebski et al., 2020). In this work, we design a series of large scale experiments studying the evolution of the loss sharpness as we vary the learning rate, warmup period, initialization, and architectural choices. Our results demonstrate the central role that λ1 plays in neural network optimization—maintaining sufficiently small λ1 during optimization is a necessary condition for successful training at large learning rates. Consequently, reducing λ1 is a primary benefit of proper tuning of a number of architecture and optimization hyperparameters, including model initialization, location of normalization, and warmup schedule. Specifically, we show the following: • We provide large scale empirical confirmation that training of neural networks with SGD+momentum is stable only when the optimization trajectory primarily resides in a region of parameter space where λ1 ≲ 2/η, where η denotes the learning rate. This corroborates the theoretical predictions of Wu et al. (2018) and recent empirical observations of Jastrzebski et al. (2020) and Cohen et al. (2021). • We demonstrate that several successful initialization strategies for architectures without normalization operate primarily by reducing curvature early in training, enabling training at larger learning rates. • We show that learning rate warmup gradually reduces λ1 during training, offering similar benefits to better model initialization. We connect the mechanism by which warmup operates to the dynamical stability model of Wu et al. (2018). • We show that learning rate warmup is a simple yet competitive baseline for research into better model initialization.
We demonstrate that key progress in this area (Dauphin and Schoenholz, 2019; Zhang et al., 2019b; Zhu et al., 2021) can be matched by the application of learning rate warmup and/or gradient clipping alone. • Finally, we show that large loss curvature can result in poor scaling at large batch sizes, and that interventions designed to improve loss conditioning can drastically improve the model’s ability to leverage data parallelism.
2 RELATED WORK Understanding BatchNorm The loss Hessian has been a central object of study for understanding optimization of neural networks. Santurkar et al. (2018) argues that an important benefit of Batch Normalization is improved smoothness of the loss surface, while Lewkowycz et al. (2020) notes that this improved smoothness is only observed when higher learning rates are used in combination with Batch Normalization. Our results are generally consistent with this current understanding of Batch Normalization; however, some of our experiments provide additional nuance—notably, we observe several instances where models suffer from training instability (and high loss curvature) early in training despite using Batch Normalization (see Section 4). Evolution of the loss Hessian Recent research has closely studied the interaction between sharpness and learning rate. Wu et al. (2018) provides a dynamical stability model which predicts that the loss curvature at convergence must satisfy λ1 ≤ 2/η (a simplified, potentially loose bound; see the original work for a more general bound that depends on both the loss curvature and the noise covariance matrix). Recent work has provided empirical evidence that λ1 ≲ 2/η often holds well before convergence (Cohen et al., 2021; Jastrzebski et al., 2020). Cohen et al. (2021) focused on full batch training at small learning rates, and observed “progressive sharpening”, where λ1 increases during training until λ1 ≈ 2.0/η. We observe that the progressive sharpening phenomenon also occurs for many models trained with SGD, though we do not investigate batch sizes ≤ 8, where Cohen et al. (2021) argue that progressive sharpening does not occur. We note that equation 8 of Wu et al. (2018) predicts that the “edge of stability” is dependent on the batch size and that at small batch sizes it can be well below the 2/η bound. We confirm this prediction holds even early in training (see Appendix Figure 16). Lewkowycz et al. (2020) proves that single hidden layer neural networks initialized at a point with λ1 > 2.0/η and trained with an MSE loss may enter a “catapult” regime—where the loss increases early until a flatter region of the loss surface is found, with divergence occurring in cases where λ1 greatly exceeds 2.0/η. In contrast to the simplified setting considered in Lewkowycz et al. (2020), we find that divergence may occur even though λ1 < 2/η at initialization.
3 EXPERIMENTAL SETUP We investigate models trained on several benchmarks: CIFAR-10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015) for image classification, LM1B (Chelba et al., 2013) for Language Modeling, and WMT for Neural Machine Translation (NMT). On CIFAR-10 we consider the WideResnet (Zagoruyko and Komodakis, 2016) and DenseNet (Huang et al., 2017) architectures, both with and without Batch Normalization. We consider two variants of the DenseNet architecture. The standard variant from the open sourced code of Zhu et al. (2021) is considered in Figure 5 and Table 1.
A less stable variant, which changes the strides in the average pooling layers to (1,1), is used for Figure 2 and is denoted as Stride-(1,1) DenseNet (see Appendix D.1 for a more detailed discussion). When training without Batch Normalization we consider several initialization strategies including the default “LeCun Normal” initialization, and running MetaInit. As a way to artificially induce worse initializations, we also consider experiments where we scale every variable produced by the default initialization by a constant factor α. The NMT models are trained on the WMT’16 EN-DE training set, tuned for hyper-parameters on the WMT’16 EN-DE validation set and evaluated on the WMT’14 EN-DE test set for BLEU scores. For NMT and LM1B Language Modeling, we train 6 layer Transformer models (Vaswani et al., 2017). Inspired by Xiong et al. (2020), we experiment with three Layer Norm settings: pre-Layer Norm, post-Layer Norm (Liu et al., 2020) and no Layer Norm for the transformer models. Each model is trained with various learning rates using cosine decay (unless explicitly mentioned otherwise). For warmup experiments we use linear warmup which starts at 0 and scales linearly to a max value η before applying cosine decay. To measure the max eigenvalue of the loss Hessian we use the Lanczos method, where the number of iterations varied as needed depending on the architecture (details provided in the appendix). 2This is a simplified, potentially loose bound. See the original work for a more general bound that depends on both the loss curvature and the noise covariance matrix. 4 EARLY TRAINING INSTABILITY AND THE LOSS HESSIAN In Figure 2 we plot the curvature at initialization and during training for a series of models trained on different datasets (plots showing final performance for all models can be found in the appendix). Each row indicates a different base model; the left column plots the curvature of the model at initialization and indicates with an ‘X’ whether or not the model diverges when trained without warmup. On the right we plot the measured curvature and learning rate at a specified point during training. We observe across all datasets that successful training occurs only when optimization enters a region of parameter space where λ1 ≤ 2/η, and that divergent models are outside this region shortly before divergence. At initialization, some models can be successfully trained even when they start out in the unstable region and generally speaking, divergence is more likely for models deeper in the unstable region. For CIFAR-10 WideResnet, removing batch norm results in a model with higher curvature at initialization and leads to divergent models when trained with a learning rate η > .1. Scaling the WideResnet initialization up by a factor of 1.5 exacerbates the problem, resulting in even higher curvature at initialization and divergence when η > 10−2. MetaInit starts the model out at a point with very small λ1, and allows training without Batch Normalization at higher learning rates than the default initialization. We also observed that higher learning rates can be unlocked when the models are trained with learning rate warmup. Warmup was particularly effective for models which exhibit large λ1 either at initialization or early in training. Other models such as the post activation Resnet-50, and WideResnet w/ Batch Normalization did not benefit from warmup at the considered learning rates (see Appendix).
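To make the λ1 ≤ 2/η criterion concrete, the following NumPy sketch runs plain gradient descent on a fixed quadratic loss L(θ) = (1/2) θTHθ and checks how the final loss behaves as the step size crosses 2/λ1. This is an illustrative toy experiment written for this exposition, not the paper's experimental code; the matrix size, step counts, and learning rates are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric PSD "Hessian" for the quadratic loss L(theta) = 0.5 * theta^T H theta.
A = rng.normal(size=(50, 50))
H = A @ A.T / 50.0
lam1 = np.linalg.eigvalsh(H).max()   # sharpness: largest eigenvalue of the Hessian
eta_crit = 2.0 / lam1                # stability threshold predicted by the quadratic model

def final_loss(eta, steps=500):
    """Run gradient descent with a fixed step size and return the final loss."""
    theta = rng.normal(size=50)
    for _ in range(steps):
        theta = theta - eta * (H @ theta)   # gradient of the quadratic is H @ theta
    return 0.5 * theta @ H @ theta

# Below the threshold the iterates contract; above it the top eigendirection grows each step.
for eta in [0.5 * eta_crit, 0.9 * eta_crit, 0.99 * eta_crit, 1.01 * eta_crit, 1.5 * eta_crit]:
    print(f"eta = {eta:.4f} ({eta / eta_crit:.2f} x 2/lambda_1): final loss = {final_loss(eta):.3e}")
```

On this toy problem the transition is sharp: step sizes even slightly above 2/λ1 amplify the component of θ along the top Hessian eigenvector at every step, mirroring the divergence behavior reported for the real models in Figure 2.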
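For reference, a minimal sketch of the schedule family used throughout the experiments, linear warmup from 0 to a peak value η followed by cosine decay, might look as follows. The function name and the specific step counts are illustrative assumptions rather than the paper's training code.

```python
import math

def warmup_cosine_lr(step, peak_lr, warmup_steps, total_steps):
    """Linear warmup from 0 to peak_lr over warmup_steps, then cosine decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Example: 1000 warmup steps to a peak learning rate of 0.1, values used for several models above.
lrs = [warmup_cosine_lr(s, peak_lr=0.1, warmup_steps=1000, total_steps=100_000)
       for s in range(100_000)]
print(lrs[0], lrs[500], lrs[1000], lrs[50_000], lrs[-1])
```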
For the Stride-(1,1) DenseNet experiments, it is noteworthy that the models with Batch Normalization actually start out with higher curvature than the non-BN variants. This is contrary to the generally accepted narrative that Batch Normalization improves the smoothness of the loss surface (Ghorbani et al., 2019; Santurkar et al., 2018). We found that the Batch Normalization models were more unstable than the non-BN variants here, as some models diverged at smaller learning rates. However, when combined with warmup the BN models were trainable at learning rates η > .1, whereas this did not hold for the non-BN variants, which diverge both with and without warmup at these learning rates. This result suggests that BN still offers training stability for this model, and flatter curvature mid training if trained with warmup and a higher learning rate; however, no smoothness benefits are observed at initialization. See Appendix D.1 for more details on this phenomenon. For Resnet-50 trained on ImageNet we compare two different residual blocks: the preactivation block (He et al., 2016b) and the more commonly used post activation block (He et al., 2016a). For the preactivation block, we also consider flipping the order of the ReLU activation and batch normalization, as was considered in Brock et al. (2021). We find that both preactivation models start out in a region of higher curvature relative to the post activation variant, and that these models diverge when η > .5 whereas the post activation variant is trainable with learning rates as large as 10. Notably, there are several models in our experiments which diverge despite starting out in a region where λ1 < 2/η. This occurs for both the pre and post layernorm transformers, and the WideResnet model initialized with MetaInit. We found for these divergent models that the curvature rapidly increases in the initial steps of training, which is partially visible in the mid training plot where we plot the final observed curvature before divergence. Full training curves for these models can be found in the appendix. This implies that measuring λ1 at initialization is not always sufficient to predict whether or not the model will be easily trained. Currently, some architectural innovations are motivated by an analysis of either gradient statistics or smoothness at initialization (Liu et al., 2020); a more robust analysis would consider the evolution of these statistics under SGD. 5 THE INTERACTION BETWEEN LEARNING RATE WARMUP, INITIALIZATION AND CURVATURE The success of learning rate warmup is inconsistent with conventional optimization wisdom, which traditionally suggests adapting the step size to the curvature (see for example the discussion around equation 2.4 in McCandlish et al. (2018)). However, with the understanding that λ1 is a dynamic quantity whose evolution is tightly coupled with the learning rate schedule, the benefits of a warmup period are more easily understood. We argue that the success of learning rate warmup follows naturally from two properties of training deep models: 1. Models diverge when the learning rate is too large relative to the 2/λ1 bound. 2. When the learning rate only slightly exceeds 2/λ1, optimization is unstable until the parameters move to a region with smaller λ1 (Wu et al., 2018; Lewkowycz et al., 2020). The first criterion implies that we cannot start η off at too large a value relative to λ1 at initialization.
The second criterion implies that gradually increasing η can gradually “push” the parameters to a region of parameter space where optimization is stable (with lower values of λ1). In Figure 4 there is clear evidence for this “pushing”, as during the warmup period we see that λ1 ≈ 2.0/η holds for a large part of the warmup phase. Furthermore, this approximation holds even as we vary the length of the warmup period. Other examples can be seen in Figure 3 (B and F), and Figure 15 in the appendix. Warmup is not the only method capable of reducing λ1 during training; one can instead initialize the model in a region where λ1 starts off small. Consider, for example, the points A, B and C in Figure 3. Each point shows optimization of a non-BN WideResnet with peak learning rate of .1. In (A) we see the model diverges within 3 steps without warmup using the default initialization. In (B) we see that a linear warmup period results in λ1 progressively decreasing until the peak step size of .1 is reached at step 1000, with no divergence occurring. Finally in (C) we initialize the same model with MetaInit, at which point λ1 is small at initialization, and the model can be trained at η = .1 without warmup. Similar to the aforementioned MetaInit, the success of related initialization strategies can be explained by reduced λ1 early in training. In Figure 5 (left) we look at the evolution of λ1 during the GradInit meta optimization process and compare this with simply training the same model using gradient clipping3. Both methods result in λ1 decreasing dramatically, after which λ1 hovers around 2/η. 3Similar to warmup, gradient clipping reduces the step size in regions of large curvature. Notably, GradInit starts regular training off at λ1 significantly below the 2/η bound, however the curvature quickly increases within a few steps. Given that initialization and warmup serve similar roles in reducing λ1, we expect to be able to achieve similar performance using the two methods. As shown in Table 1 we can easily match key advances in this field by applying learning rate warmup alone (see Appendix for experimental details). Beyond controlling λ1 mid-training, the learning rate η controls more general conditioning measures of the loss surface. For example, in Figure 5 we observe that even the MetaInit gradient quotient (the conditioning measure directly optimized by this initialization strategy) is controlled by η mid training. This again provides further evidence that the primary benefit of this initialization method is to reduce λ1 at initialization. As shown, any gains by optimizing the more general gradient quotient must be short-lived, as the initialization has no control over the long term value. 6 THE EFFECTS OF CURVATURE ON BATCH SIZE SCALING So far, we have discussed how large loss curvature limits the range of stable learning rates for training. In this section, we highlight how these limits on usable learning rates affect the model’s ability to effectively leverage larger batch sizes. Previous research has studied the interplay of the loss curvature and batch size scaling from various perspectives. Most notably, Shallue et al. (2018) observe that increasing the batch size yields consistent improvements in training speed until a (problem-dependent) critical batch size is reached; increasing the batch size beyond this threshold yields diminishing improvements in training speed. Zhang et al.
(2019a) observe that a simple Noisy Quadratic Model (NQM) is able to capture some of the empirical behavior observed in Shallue et al. (2018). Similarly, McCandlish et al. (2018) use quadratic approximations to the loss to provide a closed form expression for the critical batch size as a function of the loss Hessian and the covariance of the stochastic gradient. We contribute to this literature by highlighting the role of λ1 in the batch size scaling behavior of the model. For this analysis, we focus on three of the WideResnet variants considered in Figure 2: the BatchNorm model (a low curvature model), the non BatchNorm model (with moderate curvature), and the non BatchNorm model with 1.5X init scaling (with high curvature). We train these models while sweeping both the learning rate and the batch size.4 We then measure the number of training steps required to reach 85% validation accuracy, and the optimal learning rate found for each batch size. Similar to Shallue et al. (2018), we normalize the plotted steps to 85% accuracy by the value measured at batch size 64. The results are shown in Figure 6. A few observations are in order: The low curvature model shows almost linear speedups in training speed as the batch size increases. In contrast, the high curvature model exhibits only minimal improvements in training speed with larger batch sizes. These scaling differences are closely mirrored by how the optimal learning rate η∗ changes with the batch size: for the low curvature model η∗ increases linearly with the batch size, while for the high curvature model η∗ is fixed around 3 × 10−3. Notably, for the high curvature model η∗ is almost always the largest non-divergent value, a clear indication that the high loss curvature slows down training by preventing larger values from being used. 4We sweep for the optimal learning rate on a log-scale grid between 10−3 and 1. For batch size, we sweep over powers of 2 from 16 to 4096. A clear picture emerges from these observations. Previous research suggests that in order to effectively leverage larger batch sizes, one has to increase the learning rate in tandem with the batch size (Jastrzębski et al., 2017; Goyal et al., 2017; Shallue et al., 2018; McCandlish et al., 2018). Our results suggest that large values of λ1 place a sharp limit on the maximum learning rate possible and therefore limit the model’s ability to leverage data parallelism effectively. 7 CONCLUSION Through extensive empirical experiments measuring the evolution of the loss sharpness during training, we have demonstrated how different methods such as initialization, learning rate warmup, and normalization all enable higher learning rates to be used (without causing divergence) by reducing λ1 during training. It is noteworthy that two of the most popular models we investigated (the post-activation variant of the Resnet-50 and the standard WideResnet 28-10) did not benefit from learning rate warmup, and exhibited small values of λ1 throughout training at the learning rates we considered. Thus researchers and practitioners who primarily work with well-optimized architectures might never notice a benefit from using warmup. However, even seemingly trivial modifications to a working architecture can easily result in large values of λ1 and thus instability early in training; a naive response to such a situation would be to dramatically reduce the learning rate or, even worse, abandon the modification being investigated altogether.
We hope the perspective presented in this work can help future researchers better navigate such situations, either through investigating different initializations, applying warmup and gradient clipping, or changing the location of normalization layers in the model. A LIMITATIONS Our analysis has focused primarily on models trained with SGD with momentum. This decision was motivated by the desire to reduce additional confounds that arise when using adaptive preconditioning. Notably, it is unclear what the analogue of λ1 ≤ 2/η should be for a model trained with Adam. In the appendix, we provide evidence that loss curvature adaptation to the learning rate does occur even for Transformer models trained with Adam, and that learning rate warmup results in a similar effect of the optimization trajectory being “pushed” to flatter regions. However, we leave a deeper analysis of this for future work. Finally, while our experiments certainly raise questions about the efficacy of better model initialization in further accelerating training, our measurements have focused primarily on the (lack of) influence initialization has on λ1 mid training. It is possible that better initializations could have lasting influence on the broader Hessian eigenspectrum (for example improving the ratio λk/λ1 for smaller eigenvalues λk) and that our analysis is missing such an effect. B BRIEF REVIEW OF THE LOSS HESSIAN, EIGENVALUES AND QUADRATIC STABILITY BOUNDS For completeness, we include a formal definition of the fundamental mathematical quantities discussed in the paper. We derive most of this discussion from the relevant chapters in Horn and Johnson (2012) and Boyd et al. (2004). We refer the reader to these sources for a more detailed discussion. B.1 THE HESSIAN MATRIX The second derivative or Hessian matrix of the loss function L(·) at a point θ ∈ Rn is denoted by H(θ) ∈ Rn×n where, for all 1 ≤ i, j ≤ n, H(θ)i,j = ∂2L(θ) / (∂θi ∂θj). (1) Moreover, by Schwarz’s theorem, if the second partial derivatives of L(·) are continuous at θ, the matrix H(θ) is symmetric. This is a broad condition that holds for all the loss surfaces we examine in the main text (beyond a set of measure zero). B.2 EIGENVALUES Definition 1 Let A ∈ Rn×n. If a scalar λ and a nonzero vector x satisfy Ax = λx, λ ∈ C, x ∈ Cn, x ≠ 0, (2) then λ is called an eigenvalue of A and x is called an eigenvector of A associated with λ. If A is a symmetric real matrix (such as the Hessian matrix), A can be factored as A = QΛQT, (3) where Q ∈ Rn×n is an orthogonal matrix and Λ = diag(λ1, . . . , λn) ∈ Rn×n is a real diagonal matrix. Here, {λi}, i = 1, . . . , n, are all of the eigenvalues of A. We order λi such that λ1 ≥ λ2 ≥ · · · ≥ λn. In this ordering, λ1 corresponds to the maximum eigenvalue of A and λn corresponds to its minimum eigenvalue. The maximum and minimum eigenvalues of A satisfy the following important properties: λ1 = sup x≠0 (xTAx)/(xTx), λn = inf x≠0 (xTAx)/(xTx). (4) In particular, for any x ∈ Rn, we have λn‖x‖₂² ≤ xTAx ≤ λ1‖x‖₂². B.3 STABILITY OF GRADIENT DESCENT FOR QUADRATIC LOSS Now that we have established the basics, let’s derive the stability condition for GD applied to a quadratic loss function. Note that Wu et al. (2018) and Cohen et al. (2021) provide more general bounds for the stability of SGD-type optimization algorithms. Here, we state and derive the stability condition for GD for the sake of completeness. Let L(θ) = (1/2) θTHθ, where H is a symmetric matrix with non-negative eigenvalues. Let’s consider GD dynamics starting from a random point θ0.
Under GD with a fixed step-size η > 0, we have θt+1 = θt − η∇L(θt) = θt − ηHθt = (I − ηH)θt. Unrolling this iteration back to step 0 yields θt = (I − ηH)^t θ0. (5) As t → ∞, this iteration is stable iff5 the eigenvalues of (I − ηH) have absolute magnitude bounded by one, which can be stated equivalently as 1 − ηλ1 ≥ −1 ⇐⇒ 2 ≥ λ1η ⇐⇒ 2/η ≥ λ1, which is the exact condition discussed and explored in the main text. 5As θ0 was randomly chosen, we assume it has a non-zero overlap with all eigenvectors. C MISCELLANEOUS FIGURES D PERFORMANCE OF MODELS IN FIGURE 2 In this section, we plot the performance vs. learning rate for all of the models shown in Figure 2 of the main text. These are shown in Figures 9, 13, 10, and 11. For models which diverged, we plot the best test performance achieved before divergence. In all settings, high curvature affects the final performance by limiting the use of higher learning rates. We also noted several models in Figure 2 which diverged despite training starting out in a stable region of parameter space. In Figure 12 we plot the evolution of the loss sharpness during training, showing that it quickly enters a region where λ1 > 2.0/η before diverging around step 90. D.1 DISCUSSION OF STRIDE-(1,1) DENSENET EXPERIMENTS In this section we discuss in more detail the Stride-(1,1) DenseNet experiments shown in Figure 2 in the main text. These experiments use a non-standard version of the DenseNet architecture where all average pooling strides are set to (1,1). Note the experiments in Figure 5 and Table 1 instead use the standard strides implementation from the open sourced code of Zhu et al. (2021). The Stride-(1,1) DenseNet architecture is noteworthy because it is a counter example to common intuition that adding Batch Normalization results in flatter curvature. As shown in Figure 2 (left), the BN variants all have high curvature at initialization, however the right hand side plot shows that the mid training curvature becomes comparable to the non-BN variants. In Figure 13 we provide a more detailed analysis to understand what is happening with BN. First we plot the performance of the BN vs non-BN models both with and without warmup. The differences are striking. Without warmup, we see the BN performance is highly stochastic: some trials outperform the non-BN variants, while others underperform them. However, when trained with 1000 steps of warmup the BN variants now significantly outperform the non-BN models at all considered learning rates. They can even be successfully trained at higher learning rates than the non-BN variants, despite the high initial curvature. To provide further detail, we show the training curves of select individual runs, both the evolution of the training loss and the evolution of curvature. The BN variants all exhibit catapult behavior early: the loss increases initially until the parameters enter a region of flatter curvature. Warmup helps the BN variants, and significantly reduces the severity of the catapult phase while enabling faster long term training. Additionally, when we add warmup we find that the BN variants can now be trained at higher learning rates than the non-BN variant. As shown, at a learning rate of .22 the non-BN model diverges during the warmup phase despite its lower initial curvature. Based on these experiments we arrive at the following conclusions. First, adding BN to the Stride-(1,1) DenseNet architecture results in high curvature at initialization, which results in a short period of instability during training.
However, once the parameters escape this region of large curvature, the BN variant exhibits favorable training dynamics relative to the non-BN variants. Thus, there still seems to be benefit to adding BN, assuming steps are taken to mitigate the initial period of high curvature. The fact that adding BN to a model can result in high initial curvature is not without precedent, as Yang et al. (2019) observe that adding BN to deep fully connected networks can result in exploding gradients at initialization. These experiments highlight one of the primary takeaways of this work: that maintaining flat curvature throughout training is a necessary (but not sufficient) condition for stable training of neural networks. Thus it is not the presence of BN that is necessary for stable training; instead, BN is generally a useful tool for reducing curvature (and thus stabilizing training). BN has the clear benefit of improving curvature in most cases, but it is possible to produce configurations where adding BN paradoxically results in higher initial curvature than the non-BN variant. In these cases training is initially unstable, but once the curvature is reduced we see benefits of using BN later in training. E DETAILS ON COMPUTING THE HESSIAN EIGENSPECTRUM VIA LANCZOS We use Lanczos iterations to estimate the top eigenvalue of the Hessian. The Lanczos algorithm only requires Hessian-vector products, which can be efficiently computed via Pearlmutter’s trick (Pearlmutter, 1994). Previous research has demonstrated that this approach provides a robust and scalable framework to examine the eigenvalues of the Hessian for large neural networks (Ghorbani et al., 2019; Papyan, 2018). For our WMT / LM1B experiments, we run the algorithm for 45 steps, while for image models we use 40 steps. When monitoring the evolution of the top eigenvalue as a function of the number of Lanczos steps, in all cases except one, we observe that the algorithm converges. For the case of Resnet with ReLU→BN ordering, due to a very small eigengap between the top eigenvalue and the bulk, the convergence is significantly slower. We use 200 Lanczos steps in this case to alleviate the issue. For this model, estimating λ1 via power iteration (as is commonly done in the deep learning literature) will incorrectly return the largest negative eigenvalue, not λ1 as desired. It is well-known that the Lanczos algorithm can suffer from numerical instabilities caused by finite-precision arithmetic. To alleviate these issues, Lanczos vectors are stored in float64 precision and we perform reorthogonalization at each step of the algorithm. F TRAINING DETAILS FOR TABLE 1 F.1 NEURAL MACHINE TRANSLATION Neural Machine Translation experiments are based on the Transformer models (Vaswani et al., 2017). We use separate embeddings on encoder and decoder, and a common word piece vocabulary of size 32000. For depth, we use 6 layers on both encoder and decoder. For width, we experiment with two models, namely Transformer-Base and Transformer-Wide. For Transformer-Base, we use word embeddings with 512 dimensions, 8 heads and 2048 feed-forward dimension. For Transformer-Wide we use word embeddings with 1024 dimensions, 16 heads and 4096 feed-forward dimension. The experiments reported in Figures 3 and 5 use Transformer-Base. The experiments reported in Table 1 use Transformer-Wide models trained with Adam (Kingma and Ba, 2014). We sweep over warm-up, learning rate, gradient clipping and init_scaling and optimize for validation loss to evaluate performance on test set BLEU reported in Table 1.
All the models are trained for 60 epochs at batch size of 1024 for Transformer-Base models, and batch size of 512 for Transformer-Big models. We use dropout of 0.1, label smoothing of 0.1 and no weight decay for all these models. F.2 DENSENETS In Table 1 the ResNet-50 (w/o BN) architecture was trained for 100 epochs at batch size 512, with l2 regularization of 5e-5, dropout of .3. It was trained with SGD with nesterov momentum of .9 and learning rate of .2. We applied gradient clipping at global l2 norm of 5 and used linear learning rate warmup with warmup period of 1000 steps. For Table 1, the DenseNet-100 model was trained using the GradInit codebase (https://github.com/zhuchen03/gradinit) by modifying the supplied DenseNet script to apply gradient clipping of norm 6 and to use the default initialization instead of GradInit. G TRAINING DETAILS FOR FIGURE 2 The WideResnet-28-10 models were trained with batch size of 1024 for 300 epochs. We applied the MixUp augmentation (Zhang et al., 2017). For learning rate warmup we used 1000 steps of linear warmup until the peak learning rate is achieved, at which point the learning rate is decayed according to the cosine schedule. The Stride-(1,1) DenseNet models were trained with batch size of 512 using the SGD optimizer with momentum of 0.9, weight decay of 5e-4, L2 regularization of 1e-4 and warmup of 1000 steps (for the models where warmup is used) followed by cosine decay. The models were trained for 200 epochs. For the DenseNet architecture we used growth_rate of 32 and reduction of 0.5. The Resnet-50 models were trained with batch size of 2048 using the SGD optimizer with nesterov momentum of .9. The learning rate schedule was the same as in the WideResnet case, with linear warmup of 1000 steps followed by cosine decay. We applied label smoothing of .1 and used the standard flip plus crop for data augmentation. The Transformer models on LM1B were trained at batch size 1024 using SGD with nesterov momentum of .9. We use embedding dimension of 512, 6 layers with 8 heads and MLP hidden dimension of 1024. The attention dropout rate was .1. The learning rate schedule followed the same recipe as in the Resnet cases. H CURVATURE ADAPTATION WITH THE ADAM OPTIMIZER The discussion in the main text focused primarily on models trained with SGD and momentum. In this appendix, we briefly examine if similar conclusions hold for optimizers such as Adam that use preconditioning. It is unclear a priori whether or not curvature adaptation to the learning rate should occur for optimizers which apply preconditioning. However, given that Adam is a diagonal preconditioner applied to a non-diagonal Hessian, there may be some similar effects observed. Consider a simple quadratic loss L(θ) = (1/2) θTHθ with H ⪰ 0, where optimization is performed via preconditioned gradient descent with a fixed diagonal preconditioning matrix D: θt = θt−1 − ηD−1∇L(θt−1) = θt−1 − ηD−1Hθt−1 = (I − ηD−1H)θt−1 = (I − ηD−1H)^t θ0. As such, this simple model would suggest that the max eigenvalue of the following matrix may be related to training instability of models trained with Adam: λmax(D−1H) = λmax(D−1/2HD−1/2). (6) While (6) does not take into account the effects of adaptive preconditioning or momentum, we find some empirical evidence that this approximation provides insight into the stability of the optimization. Figure 14 below examines the evolution of λmax(D−1/2HD−1/2) for three Transformer models trained with Adam and different warm-up lengths.
Here, D is a diagonal matrix with Di,i = √(corrected Adam grad squared EMA)i + εAdam. We observe that, similar to the models trained with momentum, the maximum (preconditioned) Hessian eigenvalue adapts to the warm-up schedule (green and red markers). We notice that, perhaps due to the effect of momentum or adaptive preconditioning, the threshold 2/η does not seem to align well with the data. Instead, an empirically corrected threshold 40/η seems to fit the data better. We observe that instabilities in model training coincide exactly with λmax(D−1/2HD−1/2) crossing the empirically corrected threshold. These observations suggest that some of the insights discussed in the main text seem to carry over to the case of adaptive optimizers. We leave further exploration of this more complex setting to future work. I COMPUTE RESOURCES USED Nearly all experiments utilized the Google Cloud Platform with v2 cloud TPUs except for the following: The Figure 2 Resnet-50 and Stride-(1,1) Densenet experiments utilized the v3 cloud TPU, while the GradInit code was run on a cloud machine with a single V100 GPU. The Figure 2 experiments were done in parallel using up to 50 v2 TPUs concurrently over the period of a few days. Additionally, all the Machine Translation models were trained on v3 cloud TPUs.
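As a concrete companion to the measurement procedure described in Appendix E, the sketch below estimates λ1 using a Lanczos-type eigensolver that accesses the Hessian only through matrix-vector products. It uses an explicit toy Hessian so the estimate can be verified against the exact value; in a real model the matvec would be a Pearlmutter-style Hessian-vector product through the loss, with float64 storage and reorthogonalization handled as described above. All names and sizes here are illustrative assumptions, not the paper's tooling.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(0)
n = 200

# Toy symmetric (possibly indefinite) "Hessian", accessed only through matrix-vector products.
B = rng.normal(size=(n, n))
H = (B + B.T) / 2.0

def hvp(v):
    # Stand-in for a Pearlmutter Hessian-vector product (the gradient of grad(L) . v).
    v = np.ravel(np.asarray(v, dtype=np.float64))
    return H @ v

op = LinearOperator((n, n), matvec=hvp, dtype=np.float64)

# which="LA" requests the largest algebraic eigenvalue, i.e. lambda_1, rather than the largest
# magnitude one, avoiding the power-iteration failure mode mentioned in Appendix E.
lam1_est = eigsh(op, k=1, which="LA", return_eigenvectors=False)[0]
lam1_true = np.linalg.eigvalsh(H).max()
print(f"estimated lambda_1 = {lam1_est:.6f}, exact lambda_1 = {lam1_true:.6f}")
```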
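The preconditioned curvature λmax(D−1/2HD−1/2) from Appendix H can be estimated with the same machinery, since D is diagonal and the symmetrically preconditioned matrix is again available through matrix-vector products. The following hedged sketch uses a toy Hessian and a synthetic Adam second-moment estimate; the epsilon, the learning rate, and the 2/η and 40/η thresholds printed at the end follow the discussion above, but the specific numbers are placeholders rather than values from the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(1)
n = 200

B = rng.normal(size=(n, n))
H = (B + B.T) / 2.0                        # toy symmetric Hessian

# Synthetic Adam preconditioner: D_ii = sqrt(bias-corrected squared-gradient EMA) + eps.
v_hat = rng.uniform(1e-4, 1e-2, size=n)    # placeholder second-moment estimates
eps_adam = 1e-8
d = np.sqrt(v_hat) + eps_adam              # diagonal of D
d_inv_sqrt = 1.0 / np.sqrt(d)

def precond_hvp(v):
    # Matrix-vector product with D^{-1/2} H D^{-1/2}; its eigenvalues match those of D^{-1} H.
    v = np.ravel(np.asarray(v, dtype=np.float64))
    return d_inv_sqrt * (H @ (d_inv_sqrt * v))

op = LinearOperator((n, n), matvec=precond_hvp, dtype=np.float64)
lam_precond = eigsh(op, k=1, which="LA", return_eigenvectors=False)[0]

eta = 1e-3                                 # placeholder Adam learning rate
print(f"lambda_max(D^-1/2 H D^-1/2) = {lam_precond:.2f}")
print(f"2/eta = {2 / eta:.1f}, empirically corrected 40/eta = {40 / eta:.1f}")
print("exceeds 40/eta threshold:", bool(lam_precond > 40 / eta))
```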
1. What is the main contribution of the paper regarding understanding trainable architectures? 2. What are the key observations from the empirical studies on loss curvature and sharpness? 3. How does the paper relate to prior works such as Keskar et al. (ICLR 2017) and Jiang et al. (ICLR 2020)? 4. Do you have any concerns about the clarity and organization of the paper's content? 5. Are there any potential connections between the sharpness/smoothness measurements used and the generalization ability of the loss surface at the end of training?
Summary Of The Paper Review
Summary Of The Paper The paper aims to understand what makes general architectures trainable, specifically what limits the maximum learning rate for deep learning models trained with SGDM, from the loss curvature perspective. They empirically studied the evolution of the loss sharpness as they vary the learning rate, warmup period, initialization, and architectural choices. They show that maintaining a sufficiently small λ1 is a necessary condition for successful training at a large learning rate, and that different methods such as initialization, learning rate warmup, and normalization all enable higher learning rates to be used by reducing λ1 during training. More specifically: Some initialization strategies for architectures without normalization actually operate by reducing curvature early in training, enabling training at larger learning rates. Learning rate warmup gradually reduces λ1 during training, which is a competitive baseline in comparison with better model initialization methods. Large loss curvature can result in poor scaling at large batch sizes, and interventions designed to improve loss conditioning can improve the model’s ability to leverage data parallelism. Review The paper presents the necessity of low curvature at the early stage of training for stable neural network training with a large learning rate; some notable observations include “the mid-training conditioning is determined by the learning rate, not on the initialization method used” and “Learning rate warmup can match the performance of recent advances in initialization research”. It seems to suggest that the weight initialization does not matter that much and that its benefits can be matched by warmup. However, the paper is somewhat hard to read, as many empirical observations are presented but not well organized. For example, the main measurement λ1 is not clearly introduced; there is only a small footnote saying that λ1 refers to the maximum eigenvalue of the loss Hessian. The reason why 2/η is chosen is also not clear. The curvature and sharpness: the authors use the term sharpness, i.e. the maximum eigenvalue of the loss Hessian, to denote the curvature of the loss surface; however, there are other sharpness/smoothness measurements such as [1, 2], which were used to denote the generalization ability of the loss surface at the end of training. It is easy to get confused by these terms under different conditions. Are there any connections? If so, why not study them as well? It would be great to know how those metrics are correlated with the curvature. If not, it would be better to make clear the difference and not use the terms interchangeably. The authors have shown that many techniques actually lead to low curvature. However, curvature itself cannot reliably determine trainability. As noted by the authors, some models can be successfully trained even when they start out in the unstable region, and measuring λ1 at initialization is not always sufficient to predict whether or not the model will be easily trained. On the other hand, the DenseNet experiments show that the models with Batch Normalization actually start out with higher curvature than the non-BN variants, which suggests that curvature may not be able to explain the effects of BN quite well, and no smoothness benefits are observed at initialization. I would hope there are more investigations here, as BN is still a critical module and the authors’ observation contradicts previous belief.
[1] Keskar et al., On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. ICLR 2017. [2] Jiang et al., Fantastic Generalization Measures and Where to Find Them. ICLR 2020.
ICLR
Title A Loss Curvature Perspective on Training Instabilities of Deep Learning Models Abstract In this work, we study the evolution of the loss Hessian across many classification tasks in order to understand the effect the curvature of the loss has on the training dynamics. Whereas prior work has focused on how different learning rates affect the loss Hessian observed during training, we also analyze the effects of model initialization, architectural choices, and common training heuristics such as gradient clipping and learning rate warmup. Our results demonstrate that successful model and hyperparameter choices allow the early optimization trajectory to either avoid— or navigate out of—regions of high curvature and into flatter regions that tolerate a higher learning rate. Our results suggest a unifying perspective on how disparate mitigation strategies for training instability ultimately address the same underlying failure mode of neural network optimization, namely poor conditioning. Inspired by the conditioning perspective, we show that learning rate warmup can improve training stability just as much as batch normalization, layer normalization, MetaInit, GradInit, and Fixup initialization. 1 INTRODUCTION Optimization of neural networks can easily fail. While recent architectural advances such as skip connections (He et al., 2016a) and Batch Normalization (Ioffe and Szegedy, 2015) have been applied successfully to produce architectures and hyperparameters that reliably train well, even small changes to a trainable configuration can easily result in training that diverges. More generally, producing a configuration that strikes the right balance between stable training and rapid optimization progress on a new domain can be difficult—practitioners and researchers have few reliable heuristics to guide them through the process. As a result, the specific hyperparameter tuning protocol has an outsized influence on the results (Choi et al., 2019; Sivaprasad et al., 2020) and successes often rely on large hyperparameter searches (Nado et al., 2021). Developing a principled understanding of what makes general architectures trainable would allow researchers to more reliably navigate this process and has the potential to dramatically accelerate research into finding better, more scalable architectures.
The focus of the empirical investigation of this work is to better understand what limits the maximum trainable learning rate for deep learning models trained with the typical minibatch stochastic gradient descent (SGD) family algorithms. As part of this investigation, we examine several methods developed by the deep learning community that have enabled training at larger learning rates and improved performance. Many methods have been developed that can achieve this goal, notably normalization, learning rate warmup, gradient clipping (Pascanu et al., 2013), and better model initializations such as Fixup (Zhang et al., 2019b), MetaInit (Dauphin and Schoenholz, 2019), and GradInit (Zhu et al., 2021). While these methods are certainly not exactly equivalent, a key property they all have in common is that they can enable training at larger learning rates when applied to certain models (see for example Figure 1). ∗Equal Contribution. Correspondence to {gilmer, ghorbani}@google.com. A natural hypothesis is that methods which enable training at larger learning rates do so through reducing the sharpness1 of the loss surface during training. Indeed, this hypothesis has already been proposed as one of the beneficial effects of Batch Normalization (Ghorbani et al., 2019; Santurkar et al., 2018) and residual connections (Li et al., 2017), and quadratic models of the loss surface predict that optimization with SGD is unstable when λ1 > 2/η (Wu et al., 2018). However, recent empirical investigations into the relevance of quadratic stability bounds to neural network training have either focused on smaller models, focused on full batch training at small learning rates, and do not investigate connections between sharpness, model initialization and learning rate warmup (Cohen et al., 2021; Jastrzebski et al., 2020). In this work, we design a series of large scale experiments studying the evolution of the loss sharpness as we vary the learning rate, warmup period, initialization, and architectural choices. Our results demonstrate the central role that λ1 plays in neural network optimization—maintaining sufficiently small λ1 during optimization is a necessary condition for successful training at large learning rates. Consequently, reducing λ1 is a primary benefit of proper tuning of a number of architecture and optimization hyperparameters: including model initialization, location of normalization, and warmup schedule. Specifically, we show the following: • We provide large scale empirical confirmation that training of neural networks with SGD+momentum is stable only when the optimization trajectory primarily resides in a region of parameter space where λ1 . 2/η, where η denotes the learning rate. This corroborates the theoretical predictions of Wu et al. (2018) and recent empirical observations of Jastrzebski et al. (2020) and Cohen et al. (2021). • We demonstrate that several successful initialization strategies for architectures without normalization operate primarily by reducing curvature early in training, enabling training at larger learning rates. • We show that learning rate warmup gradually reduces λ1 during training, offering similar benefits to better model initialization. We connect the mechanism by which warmup operates to the dynamical stability model of Wu et al. (2018). • We show that learning rate warmup is a simple yet competitive baseline for research into better model initialization. 
We demonstrate that key progress in this area (Dauphin and Schoenholz, 2019; Zhang et al., 2019b; Zhu et al., 2021) can be matched by the application of learning rate warmup and/or gradient clipping alone. 1Throughout this work we will use the term sharpness to refer to the maximum eigenvalue of the loss Hessian, denoted as λ1. See Appendix B for more details. • Finally, we show that large loss curvature can result in poor scaling at large batch sizes and interventions designed to improve loss conditioning can drastically improve the model’s ability to leverage data parallelism. 2 RELATED WORK Understanding BatchNorm The loss Hessian has been a central object of study for understanding optimization of neural networks. Santurkar et al. (2018) argues that an important benefit of Batch Normalization is improved smoothness of the loss surface, while Lewkowycz et al. (2020) notes that this is improved smoothness is only observed when higher learning rates are used in combination with Batch Normalization. Our results are generally consistent with this current understanding of Batch Normalization, however some of our experiments provide additional nuance—notably we observe several instances where models suffer from training instability (and high loss curvature) early in training despite using Batch Normalization (see Section 4). Evolution of the loss Hessian Recent research has closely studied the interaction between sharpness and learning rate. Wu et al. (2018) provides a dynamical stability model which predicts that the loss curvature at convergence must satisfy2 λ1 ≤ 2/η. Recent work has provided empirical evidence that λ1 . 2/η often holds well before convergence (Cohen et al., 2021; Jastrzebski et al., 2020). Cohen et al. (2021) focused on full batch training at small learning rates, and observed “progressive sharpening”, where λ1 increases during training until λ1 ≈ 2.0/η. We observe the progressive sharpening phenomenon also occurs for many models trained with SGD, though we do not investigate batch sizes ≤ 8, where Cohen et al. (2021) argue that progressive sharpening does not occur. We note that Wu et al. (2018) equation 8 predicts the “edge of stability” is dependent on the batch size and that at small batch sizes this can be will below the 2/η bound. We confirm this prediction holds even early in training (see Appendix Figure 16). Lewkowycz et al. (2020) proves that for single hidden layer neural networks initialized at point with λ1 > 2.0/η and trained with an MSE loss may enter a “catapult” regime—where the loss increases early until a flatter region of the loss surface is found, with divergence occurring in cases where λ1 greatly exceeds 2.0/η. In contrast to the simplified setting considered in Lewkowycz et al. (2020), we find that divergence may occur even though λ1 2/η at initialization. 3 EXPERIMENTAL SETUP We investigate models trained on several benchmarks: CIFAR-10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015) for image classification, LM1B (Chelba et al., 2013) for Language Modeling, and WMT for Neural Machine Translation (NMT). On CIFAR-10 we consider the WideResnet (Zagoruyko and Komodakis, 2016) and DenseNet (Huang et al., 2017) architectures, both with and without Batch Normalization. We consider two variants of the DenseNet architecture. The standard variant from the open sourced code of Zhu et al. (2021) is considered in Figure 5 and Table 1. 
A less stable variant changes the strides in the average pooling layers to (1,1) is used for Figure 2 and is denoted as Stride-(1,1) DenseNet (see Appendix D.1 for a more detailed discussion). When training without Batch Normalization we consider several initialization strategies including the default “LeCun Normal” initialization, and running MetaInit. As a way to artificially induce worse initializations, we also consider experiments where we scale every variable produced by the default initialization by a constant factor α. The NMT models are trained on the WMT’16 EN-DE training set, tuned for hyper-parameters on the WMT’16 EN-DE validation set and evaluated on the WMT’14 EN-DE test set for BLEU scores. For NMT and LM1B Language Modeling, we train 6 layer Transformer models (Vaswani et al., 2017). Inspired from Xiong et al. (2020), we experiment with three Layer Norm settings: pre-Layer Norm, post-Layer Norm (Liu et al., 2020) and no Layer Norm for the transformer models. Each model is trained with various learning rates using cosine decay (unless mentioned explicitly). For warmup experiments we use linear warmup which starts at 0 and scales linearly to a max value η before applying cosine decay. To measure the max eigenvalue of the loss Hessian we use the 2This is a simplified, potentially loose bound. See the original work for a more general bound that depends on both the loss curvature and the noise covariance matrix. Lanczos method where the number of iterations varied as needed depending on the architecture (details provided in the appendix). 4 EARLY TRAINING INSTABILITY AND THE LOSS HESSIAN In Figure 2 we plot the curvature at initialization and during training for a series of models trained on different datasets (plots showing final performance for all models can be found in the appendix). Each row indicates a different base model, the left column plots the curvature of the model at initialization and indicates with an ‘X’ whether or not the model diverges when trained without warmup. On the right we plot the measured curvature and learning rate at a specified point during training. We observe across all datasets that successful training occurs only when optimization enters a region of parameter space where λ1 ≤ 2/η, and that divergent models are outside this region shortly before divergence. At initialization, some models can be successfully trained even when they start out in the unstable region and generally speaking, divergence is more likely for models deeper in the unstable region. For CIFAR-10 WideResnet, removing batch norm results in a model with higher curvature at initialization and results in divergent models when trained with a learning rate η > .1. Scaling the WideResnet initialization up by a factor of 1.5 exacerbates the problem, resulting in even higher curvature at initialization and divergence when η > 10−2. MetaInit starts the model out at a point with very small λ1, and allows training without Batch Normalization at higher learning rates than the default initialization. We also observed that higher learning rates can be unlocked when the models are trained with learning rate warmup. Warmup was particularly effective for models which exhibit large λ1 either at initialization or early in training. Other models such as the post activation Resnet-50, and WideResnet w/ Batch Normalization did not benefit from warmup at the considered learning rates (see Appendix). 
For the Stride-(1,1) DenseNet experiments, it is noteworthy that the models with Batch Normalization actually start out with higher curvature than the non-BN variants. This is contrary to the generally accepted narrative that Batch Normalization improves the smoothness of the loss surface (Ghorbani et al., 2019; Santurkar et al., 2018). We found that the Batch Normalization models were more unstable than the non-BN variants here, as some models diverged at smaller learning rates. However, when combined with warmup the BN models were trainable at learning rates η > .1, whereas this did not hold for the non-BN variants, which diverge both with and without warmup at these learning rates. This result suggests that BN still offers training stability for this model, and flatter curvature mid training if trained with warmup and a higher learning rate, however no smoothness benefits are observed at initialization. See Appendix D.1 for more details on this phenomenon. For Resnet-50 trained on ImageNet we compare two different residual blocks: the preactivation block (He et al., 2016b) and the more commonly used post activation block (He et al., 2016a). For the preactivation block, we also consider flipping the order of the ReLU activation and batch normalization, as was considered in Brock et al. (2021). We find that both preactivation models start out in a region of higher curvature relative to the post activation variant, and that these models diverge when η > .5 whereas the post activation variant is trainable with learning rates as large at 10. Notably, there are several models in our experiments which diverge despite starting out in a region where λ1 < 2/η. This occurs for both the pre and post layernorm transformer, and the WideResnet model initialized with MetaInit. We found for these divergent models that the curvature rapidly increases in the initial steps of training, which is partially visible in the mid training plot where we plot the final observed curvature before divergence. Full training curves for these models can be found in the appendix. This implies that measuring λ1 at initialization is not always sufficient to predict whether or not the model will be easily trained. Currently, some architectural innovations are motivated by an analysis of either gradient statistics or smoothness at initialization (Liu et al., 2020)—a more robust analysis would consider the evolution of these statistics under SGD. 5 THE INTERACTION BETWEEN LEARNING RATE WARMUP, INITIALIZATION AND CURVATURE The success of learning rate warmup is inconsistent with conventional optimization wisdom, which traditionally suggests adapting the step size to the curvature (see for example the discussion around equation 2.4 in McCandlish et al. (2018)). However, with the understanding that λ1 is a dynamic quantity whose evolution is tightly coupled with the learning rate schedule, the benefits of a warmup period are more easily understood. We argue that the success of learning rate warmup follows naturally from two properties of training deep models: 1. Models diverge when the learning rate is too large relative to the 2/λ1 bound. 2. When the learning rate only slightly exceeds 2/λ1 optimization is unstable until the parameters move to a region with smaller λ1 (Wu et al., 2018; Lewkowycz et al., 2020). The first criteria implies that we can’t start η off at too large of a value relative to λ1 at initialization. 
The second criteria implies that gradually increasing η can gradually “push” the parameters to a region of parameter space where optimization is stable (with lower values of λ1). In Figure 4 there is clear evidence for this “pushing”, as during the warmup period the we see that λ1 ≈ 2.0/η holds for a large part of the warmup phase. Furthermore, this approximation holds even as we vary the length of the warmup period. Other examples can be seen in Figure 3 (B and F), and Figure 15 in the appendix. Warmup is not the only method capable of reducing λ1 during training, one can instead initialize the model in a region where λ1 starts off small. Consider for example, the points A, B and C in Figure 3. Each point shows optimization of a non-BN WideResnet with peak learning rate of .1. In (A) we see the model diverges within 3 steps without warmup using the default initialization. In (B) we see that a linear warmup period results in λ1 progressively decreasing until the peak step size of .1 is reached at step 1000, with no divergence occurring. Finally in (C) we initialize the same model with MetaInit, at which point λ1 is small at initialization, and the model can be trained at η = .1 without warmup. Similar to the aforementioned MetaInit, the success of related initialization strategies can be explained by reduced λ1 early in training. In Figure 5 (left) we look at the evolution of λ1 during the GradInit meta optimization process and compare this with simply training the same model using gradient clipping3. Both methods result in λ1 decreasing dramatically, after which λ1 hovers around 2/η. 3Similar to warmup, gradient clipping reduces the step size in regions of large curvature. Notably, GradInit starts regular training off at λ1 significantly below the 2/η bound, however the curvature quickly increases within a few steps. Given that initialization and warmup serve similar roles in reducing λ1, we expect to be able to achieve similar performance using the two methods. As shown in Table 1 we can easily match key advances in this field by applying learning rate warmup alone (see Appendix for experimental details). Beyond controlling λ1 mid-training, the learning rate η controls more general conditioning measures of the loss surface. For example, in Figure 5 we observe that even the MetaInit gradient quotient— the conditioning measure directly optimized by this initialization strategy—is controlled by η mid training. This again provides further evidence that the primary benefit of this initialization method is to reduce λ1 at initialization. As shown, any gains by optimizing the more general gradient quotient must be short lived as the initialization has no control over the long term value. 6 THE EFFECTS OF CURVATURE ON BATCH SIZE SCALING So far, we discussed how large loss curvature limits the range of stable learning rates for training. In this section, we highlight how these limits on usable learning rates affect the model’s ability to effectively leverage larger batch sizes. Previous research has studied the interplay of the loss curvature and batch size scaling from various different perspectives. Most notably, Shallue et al. (2018) observe that increasing the batch size yields consistent improvements in training speed until a (problem-dependent) critical batch size is reached; increasing the batch size beyond this threshold yields diminishing improvements in training speed. Zhang et al. 
(2019a) observe that a simple Noisy Quadratic Model (NQM) is able to capture some of the empirical behavior observed in Shallue et al. (2018). Similarly, McCandlish et al. (2018) use quadratic approximations to the loss to provide a closed form expression for the critical batch size as a function of the loss Hessian and the covariance of the stochastic gradient. We contribute to this literature by highlighting the role of λ1 in the batch size scaling behavior of the model. For this analysis, we focus on three of the WideResnet variants considered in Figure 2—the BatchNorm model (a low curvature model), the non BatchNorm model (with moderate curvature), and the non BatchNorm model with 1.5X init scaling (with high curvature). We train these models while sweeping both the learning rate and the batch size.4 We then measure the number of training steps required to reach 85% validation accuracy, and the optimal learning rate found for each batch size. Similar to Shallue et al. (2018), we normalize the plotted steps to 85% accuracy by the value measured at batch size 64. The results are shown in Figure 6. A few observations are in order: The low curvature model shows almost linear speedups in training speed as the batch size increases. In contrast, the high curvature model exhibits only minimal improvements in training speed with larger batch sizes. These scaling differences are closely mirrored by how the optimal learning rate η∗ changes with the batch size: for the low curvature model η∗ increases linearly with the batch size, while for the high curvature model η∗ is fixed around 3× 10−3. Notably, for the high curvature model η∗ is almost always the 4We sweep for the optimal learning rate on a log-scale grid between 10−3 and 1. For batch size, we sweep over powers of 2 from 16 to 4096. largest non-divergent value—a clear indication that the high loss curvature slows down training by preventing larger values from being used. A clear picture emerges from these observations. Previous research suggests that in order to effectively leverage larger batch sizes, one has to increase the learning rate in tandem with the batch size Jastrzębski et al. (2017); Goyal et al. (2017); Shallue et al. (2018); McCandlish et al. (2018). Our results suggest that large values of λ1 place a sharp limit on the maximum the learning rate possible and therefore, limit the model’s ability to leverage data parallelism effectively. 7 CONCLUSION Through extensive empirical experiments measuring the evolution of the loss sharpness during training, we have demonstrated how different methods such as initialization, learning rate warmup, and normalization all enable higher learning rates to be used (without causing divergence) by reducing λ1 during training. It is noteworthy that two of the most popular models we investigated (the popular post-activation variant of the Resnet-50 and the standard WideResnet 28-10) did not benefit from learning rate warmup, and exhibited small values of λ1 throughout training at the learning rates we considered. Thus researchers and practitioners who primarily work with well-optimized architectures might never notice a benefit from using warmup. However, even seemingly trivial modifications to a working architecture can easily result in large values of λ1 and thus instability early in training—a naive response to such a situation would be to dramatically reduce the learning rate or, even worse, abandon the modification being investigated all together. 
We hope the perspective presented in this work can help future researchers better navigate such situations, either through investigating different initializations, applying warmup and gradient clipping, or changing the location of normalization layers in the model. A LIMITATIONS Our analysis has focused primarily on models trained with SGD with momentum. This decision was motivated by the desire to reduce the additional confounds that arise when using adaptive preconditioning. Notably, it is unclear what the analogue of λ1 ≤ 2/η should be for a model trained with Adam. In the appendix, we provide evidence that loss curvature adaptation to the learning rate does occur even for Transformer models trained with Adam, and that learning rate warmup results in a similar effect of the optimization trajectory being “pushed” to flatter regions. However, we leave a deeper analysis of this for future work. Finally, while our experiments certainly raise questions about the efficacy of better model initialization for further accelerating training, our measurements have focused primarily on the (lack of) influence initialization has on λ1 mid-training. It is possible that better initializations could have a lasting influence on the broader Hessian eigenspectrum (for example, improving the ratio λk/λ1 for smaller eigenvalues λk) and that our analysis is missing such an effect. B BRIEF REVIEW OF THE LOSS HESSIAN, EIGENVALUES AND QUADRATIC STABILITY BOUNDS For completeness, we include a formal definition of the fundamental mathematical quantities discussed in the paper. We derive most of this discussion from the relevant chapters in Horn and Johnson (2012) and Boyd et al. (2004). We refer the reader to these sources for a more detailed discussion. B.1 THE HESSIAN MATRIX The second derivative or Hessian matrix of the loss function L(·) at a point θ ∈ R^n is denoted by H(θ) ∈ R^{n×n}, where for all 1 ≤ i, j ≤ n, H(θ)_{i,j} = ∂²L(θ) / (∂θ_i ∂θ_j). (1) Moreover, by Schwarz’s theorem, if the second partial derivatives of L(·) are continuous at θ, the matrix H(θ) is symmetric. This is a broad condition that holds for all the loss surfaces we examine in the main text (beyond a set of measure zero). B.2 EIGENVALUES Definition 1. Let A ∈ R^{n×n}. If a scalar λ and a nonzero vector x satisfy Ax = λx, λ ∈ C, x ∈ C^n, x ≠ 0, (2) then λ is called an eigenvalue of A and x is called an eigenvector of A associated with λ. If A is a symmetric real matrix (such as the Hessian matrix), A can be factored as A = QΛQ^T, (3) where Q ∈ R^{n×n} is an orthogonal matrix and Λ = diag(λ1, . . . , λn) ∈ R^{n×n} is a real diagonal matrix. Here, {λi}_{i=1}^{n} are all of the eigenvalues of A. We order the λi such that λ1 ≥ λ2 ≥ · · · ≥ λn. In this ordering, λ1 corresponds to the maximum eigenvalue of A and λn corresponds to its minimum eigenvalue. The maximum and minimum eigenvalues of A satisfy the following important properties: λ1 = sup_{x ≠ 0} (x^T A x)/(x^T x), λn = inf_{x ≠ 0} (x^T A x)/(x^T x). (4) In particular, for any x ∈ R^n, we have λn‖x‖_2^2 ≤ x^T A x ≤ λ1‖x‖_2^2. B.3 STABILITY OF GRADIENT DESCENT FOR QUADRATIC LOSS Now that we have established the basics, let’s derive the stability condition for GD applied to a quadratic loss function. Note that Wu et al. (2018) and Cohen et al. (2021) provide more general bounds for the stability of SGD-type optimization algorithms. Here, we state and derive the stability condition for GD for the sake of completeness. Let L(θ) = (1/2) θ^T H θ, where H is a symmetric matrix with non-negative eigenvalues. Let’s consider GD dynamics starting from a random point θ_0.
Under GD with a fixed step-size η > 0, we have θ_{t+1} = θ_t − η∇L(θ_t) = θ_t − ηHθ_t = (I − ηH)θ_t. Unrolling this recursion back to step 0 yields θ_t = (I − ηH)^t θ_0. (5) As t → ∞, this iteration is stable iff the eigenvalues of (I − ηH) have absolute value bounded by one, which can be stated equivalently as 1 − ηλ1 ≥ −1 ⇐⇒ 2 ≥ ηλ1 ⇐⇒ 2/η ≥ λ1, which is the exact condition discussed and explored in the main text. (As θ_0 was randomly chosen, we assume it has a non-zero overlap with all eigenvectors.) C MISCELLANEOUS FIGURES D PERFORMANCE OF MODELS IN FIGURE 2 In this section, we plot the performance vs learning rate for all of the models shown in Figure 2 of the main text. These are shown in Figures 9, 13, 10, and 11. For models which diverged, we plot the best test performance achieved before divergence. In all settings, high curvature affects the final performance by limiting the use of higher learning rates. We also noted several models in Figure 2 which diverged despite training starting out in a stable region of parameter space. In Figure 12 we plot the evolution of the loss sharpness during training, showing that it quickly enters a region where λ1 > 2.0/η before diverging around step 90. D.1 DISCUSSION OF STRIDE-(1,1) DENSENET EXPERIMENTS In this section we discuss in more detail the Stride-(1,1) DenseNet experiments shown in Figure 2 in the main text. These experiments use a non-standard version of the DenseNet architecture where all average pooling strides are set to (1,1). Note that the experiments in Figure 5 and Table 1 instead use the standard strides implementation from the open sourced code of Zhu et al. (2021). The Stride-(1,1) DenseNet architecture is noteworthy because it is a counterexample to the common intuition that adding Batch Normalization results in flatter curvature. As shown in Figure 2 (left), the BN variants all have high curvature at initialization; however, the right-hand plot shows that the mid-training curvature becomes comparable to the non-BN variants. In Figure 13 we provide a more detailed analysis to understand what is happening with BN. First, we plot the performance of the BN vs non-BN models both with and without warmup. The differences are striking. Without warmup, we see the BN performance is highly stochastic: some trials outperform the non-BN variants, while others underperform them. However, when trained with 1000 steps of warmup, the BN variants significantly outperform the non-BN models at all considered learning rates. They can even be successfully trained at higher learning rates than the non-BN variants, despite the high initial curvature. To provide further detail, we show the training curves of select individual runs, both the evolution of the training loss and the evolution of curvature. The BN variants all exhibit catapult behavior early—the loss increases initially until the parameters enter a region of flatter curvature. Warmup helps the BN variants, and significantly reduces the severity of the catapult phase while enabling faster long-term training. Additionally, when we add warmup we find that the BN variants can now be trained at higher learning rates than the non-BN variant. As shown, at a learning rate of .22 the non-BN model diverges during the warmup phase despite lower initial curvature. Based on these experiments, we arrive at the following conclusions. First, adding BN to the Stride-(1,1) DenseNet architecture results in high curvature at initialization, which results in a short period of instability during training.
However, once the parameters escape this region of large curvature, the BN variant exhibits favorable training dynamics relative to the non-BN variants. Thus, there still seems to be benefit to adding BN, assuming steps are taken to mitigate the initial period of high curvature. The fact that adding BN to a model can result in high initial curvature is not without precedent, as Yang et al. (2019) observe that adding BN to deep fully connected networks can result in exploding gradients at initialization. These experiments highlight one of the primary takeaways of this work: that maintaining flat curvature throughout training is a necessary (not sufficient) condition for stable training of neural networks. Thus, it is not the presence of BN that is necessary for stable training; instead, BN is generally a useful tool for reducing curvature (and thus stabilizing training). BN has the clear benefit of improving curvature in most cases, but it is possible to produce configurations where adding BN paradoxically results in higher initial curvature than the non-BN variant. In these cases training is initially unstable, but once the curvature is reduced we see the benefits of using BN later in training. E DETAILS ON COMPUTING THE HESSIAN EIGENSPECTRUM VIA LANCZOS We use Lanczos iterations to estimate the top eigenvalue of the Hessian. The Lanczos algorithm only requires Hessian-vector products, which can be efficiently computed via Pearlmutter’s trick (Pearlmutter, 1994). Previous research has demonstrated that this approach provides a robust and scalable framework to examine the eigenvalues of the Hessian for large neural networks (Ghorbani et al., 2019; Papyan, 2018). For our WMT / LM1B experiments, we run the algorithm for 45 steps, while for image models we use 40 steps. When monitoring the evolution of the top eigenvalue as a function of the number of Lanczos steps, in all cases except one we observe that the algorithm converges. For the case of Resnet with ReLU→BN ordering, due to a very small eigengap between the top eigenvalue and the bulk, the convergence is significantly slower. We use 200 Lanczos steps in this case to alleviate the issue. For this model, estimating λ1 via power iteration (as is commonly done in the deep learning literature) will incorrectly return the largest negative eigenvalue, not λ1 as desired. It is well known that the Lanczos algorithm can suffer from numerical instabilities caused by finite-precision arithmetic. To alleviate these issues, Lanczos vectors are stored in float64 precision and we perform reorthogonalization at each step of the algorithm. F TRAINING DETAILS FOR TABLE 1 F.1 NEURAL MACHINE TRANSLATION Neural Machine Translation experiments are based on the Transformer models (Vaswani et al., 2017). We use separate embeddings on encoder and decoder, and a common word piece vocabulary of size 32000. For depth, we use 6 layers on both encoder and decoder. For width, we experiment with two models, namely Transformer-Base and Transformer-Wide. For Transformer-Base, we use word embeddings with 512 dimensions, 8 heads, and a 2048 feed-forward dimension. For Transformer-Wide, we use word embeddings with 1024 dimensions, 16 heads, and a 4096 feed-forward dimension. The experiments reported in Figures 3 and 5 use Transformer-Base. The experiments reported in Table 1 use Transformer-Wide models trained with Adam (Kingma and Ba, 2014). We sweep over warm-up, learning rate, gradient clipping, and init_scaling, optimizing for validation loss; test set BLEU is reported in Table 1.
All the models are trained for 60 epochs, at a batch size of 1024 for Transformer-Base models and a batch size of 512 for Transformer-Big models. We use dropout of 0.1, label smoothing of 0.1, and no weight decay for all these models. F.2 DENSENETS In Table 1 the ResNet-50 (w/o BN) architecture was trained for 100 epochs at batch size 512, with L2 regularization of 5e-5 and dropout of .3. It was trained with SGD with Nesterov momentum of .9 and a learning rate of .2. We applied gradient clipping at a global L2 norm of 5 and used linear learning rate warmup with a warmup period of 1000 steps. For Table 1, the DenseNet-100 model was trained using the GradInit codebase (https://github.com/zhuchen03/gradinit) by modifying the supplied DenseNet script to apply gradient clipping of norm 6 and to use the default initialization instead of GradInit. G TRAINING DETAILS FOR FIGURE 2 The WideResnet-28-10 models were trained with a batch size of 1024 for 300 epochs. We applied the MixUp augmentation (Zhang et al., 2017). For learning rate warmup we used 1000 steps of linear warmup until the peak learning rate is achieved, at which point the learning rate is decayed according to the cosine schedule. The Stride-(1,1) DenseNet models were trained with a batch size of 512 using the SGD optimizer with momentum of 0.9, weight decay of 5e-4, L2 regularization of 1e-4, and warmup of 1000 steps (for the models where warmup is used) followed by cosine decay. The models were trained for 200 epochs. For the DenseNet architecture we used a growth_rate of 32 and a reduction of 0.5. The Resnet-50 models were trained with a batch size of 2048 using the SGD optimizer with Nesterov momentum of .9. The learning rate schedule was the same as in the WideResnet case, with linear warmup of 1000 steps followed by cosine decay. We applied label smoothing of .1 and used the standard flip plus crop for data augmentation. The Transformer models on LM1B were trained at batch size 1024 using SGD with Nesterov momentum of .9. We use an embedding dimension of 512, 6 layers with 8 heads, and an MLP hidden dimension of 1024. The attention dropout rate was .1. The learning rate schedule followed the same recipe as in the Resnet cases. H CURVATURE ADAPTATION WITH THE ADAM OPTIMIZER The discussion in the main text focused primarily on models trained with SGD and momentum. In this appendix, we briefly examine whether similar conclusions hold for optimizers such as Adam that use preconditioning. It is unclear a priori whether or not curvature adaptation to the learning rate should occur for optimizers which apply preconditioning. However, given that Adam is a diagonal preconditioner applied to a non-diagonal Hessian, there may be some similar effects observed. Consider a simple quadratic loss L(θ) = (1/2) θ^T H θ with H ⪰ 0, where optimization is performed via preconditioned gradient descent with a fixed diagonal preconditioning matrix D: θ_t = θ_{t−1} − ηD^{−1}∇L(θ_{t−1}) = θ_{t−1} − ηD^{−1}Hθ_{t−1} = (I − ηD^{−1}H)θ_{t−1} = (I − ηD^{−1}H)^t θ_0. As such, this simple model would suggest that the max eigenvalue of the following matrix may be related to the training instability of models trained with Adam: λmax(D^{−1}H) = λmax(D^{−1/2}HD^{−1/2}). (6) While (6) does not take into account the effects of adaptive preconditioning or momentum, we find some empirical evidence that this approximation provides insight into the stability of the optimization. Figure 14 below examines the evolution of λmax(D^{−1/2}HD^{−1/2}) for three Transformer models trained with Adam and different warm-up lengths.
Here, D is a diagonal matrix with D_{i,i} = √(corrected Adam grad-squared EMA) + ε_Adam. We observe that, similar to the models trained with momentum, the maximum (preconditioned) Hessian eigenvalue adapts to the warm-up schedule (green and red markers). We notice that, perhaps due to the effect of momentum or adaptive preconditioning, the threshold 2/η does not seem to align well with the data. Instead, an empirically corrected threshold 40/η seems to fit the data better. We observe that instabilities in model training coincide exactly with λmax(D^{−1/2}HD^{−1/2}) crossing the empirically corrected threshold. These observations suggest that some of the insights discussed in the main text seem to carry over to the case of adaptive optimizers. We leave further exploration of this more complex setting to future work. I COMPUTE RESOURCES USED Nearly all experiments utilized the Google Cloud Platform with v2 cloud TPUs, except for the following: the Figure 2 Resnet-50 and Stride-(1,1) DenseNet experiments utilized v3 cloud TPUs, while the GradInit code was run on a cloud machine with a single V100 GPU. The Figure 2 experiments were done in parallel using up to 50 v2 TPUs concurrently over the period of a few days. Additionally, all the Machine Translation models were trained on v3 cloud TPUs.
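The quadratic stability bounds discussed in Appendices B.3 and H are easy to check numerically. The following is a minimal sketch, not code from the paper's experiments: the small random PSD Hessian, the 500-step budget, and the helper names are all assumptions of our own. It shows that gradient descent on a quadratic diverges once ηλ1 exceeds 2, and that with a fixed diagonal preconditioner the relevant quantity becomes λmax(D^{−1/2}HD^{−1/2}).

# Minimal numerical check of the stability conditions from Appendices B.3 and H.
# All names and constants here are illustrative, not taken from the paper's code.
import numpy as np

def run_gd(H, eta, steps=500, precond=None, seed=0):
    """Run (preconditioned) gradient descent on L(theta) = 0.5 * theta^T H theta
    and return the final loss; divergence shows up as a very large value."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=H.shape[0])
    D_inv = np.eye(H.shape[0]) if precond is None else np.linalg.inv(precond)
    for _ in range(steps):
        theta = theta - eta * D_inv @ (H @ theta)
    return 0.5 * theta @ H @ theta

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 10))
H = A @ A.T                                # symmetric PSD "Hessian"
lam1 = np.linalg.eigvalsh(H)[-1]           # lambda_1, the top eigenvalue

# Plain GD: stable when eta * lambda_1 <= 2, divergent just above the bound.
for eta in (1.9 / lam1, 2.1 / lam1):
    print(f"eta*lam1 = {eta * lam1:.2f}  final loss = {run_gd(H, eta):.3e}")

# Fixed diagonal preconditioner: stability is governed by
# lambda_max(D^{-1/2} H D^{-1/2}), not by lambda_1 of H itself.
D = np.diag(np.diag(H))
D_half_inv = np.diag(1.0 / np.sqrt(np.diag(D)))
lam1_pre = np.linalg.eigvalsh(D_half_inv @ H @ D_half_inv)[-1]
for eta in (1.9 / lam1_pre, 2.1 / lam1_pre):
    print(f"eta*lam1_pre = {eta * lam1_pre:.2f}  "
          f"final loss = {run_gd(H, eta, precond=D):.3e}")

At η = 1.9/λ1 the final loss stays small and bounded, while at η = 2.1/λ1 it blows up by many orders of magnitude; the preconditioned runs behave the same way once the threshold is measured on D^{−1/2}HD^{−1/2} rather than on H.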
1. What is the main contribution of the paper regarding training instabilities in deep learning models? 2. What are the strengths of the paper, particularly in terms of its empirical observations? 3. Do you have any concerns or questions about the experimental results presented in Section 4? 4. Can you explain the concept of MetaLoss briefly? 5. Why did the authors not include MetaInit as another baseline to test batch size scaling in Section 6? 6. Can you clarify the choice of learning rate used in the top left plot of Figure 6? 7. How do you interpret the log-linear improvement in BN with batch size 128, 256, 512, as shown in the bottom left plot?
Summary Of The Paper Review
Summary Of The Paper The paper explores training instabilities of deep learning models by monitoring the max eigenvalue of the Hessian across i) different architectures; ii) different optimization tricks; iii) different stages of training. The paper is a mix of several empirical observations: (Section 4) One of the main observations is that a necessary (not sufficient) requirement for "successful" training is to have a relatively low max eigenvalue throughout the training process, where "relatively low" is roughly determined by the inverse of the learning rate. (Section 5) The second main observation is specific to the learning rate warmup strategy: the warmup strategy pushes the max eigenvalue to the boundary below 2 / η , and with a better understanding of this optimization strategy, the authors show performance on multiple datasets comparable to other optimization tricks. (Section 6) Lastly, the authors explore batch size scaling with different scales of max eigenvalue, focusing on comparing BN (with small curvature only for that specific task), NoBN (larger curvature), and NoBN 1.5xinit (highest curvature). The experiment shows that in the small curvature setting, both batch size and learning rate can scale quite well. Review Strengths: The paper includes quite a lot of experimental results and definitely provides valuable empirical observations (see them in the brief summary above). It gives an insightful and convincing understanding of the learning rate warmup strategy. It also points out several failure cases where good curvature does not guarantee convergence in training, which is equally valuable. Here are the questions for each section: Section 4. I see slightly different max eigenvalues at initialization for the same architecture with different learning rates -- is it fully due to the randomness in the initialization? Or do I miss something here? Section 5. Explain MetaLoss briefly. Section 6. Given Figure 3, why would the authors not include MetaInit as another baseline to test batch size scaling? In Figure 6, what learning rate is used for the Top left plot? I'd guess it's the optimal (peak) learning rate per batch size, but it's not fully clear. If not, please clarify and better help me understand how the learning rate is chosen; If yes, it looks like BN with batch size 128, 256, 512 has a log-linear improvement from the top left plot, but not very log-linear if looking at the bottom left plot: improvement from 128 to 256 is much larger than the improvement from 256 to 512. Similar results for NoBN 1.5x init. So I may be missing something here.
ICLR
Title A Loss Curvature Perspective on Training Instabilities of Deep Learning Models Abstract In this work, we study the evolution of the loss Hessian across many classification tasks in order to understand the effect the curvature of the loss has on the training dynamics. Whereas prior work has focused on how different learning rates affect the loss Hessian observed during training, we also analyze the effects of model initialization, architectural choices, and common training heuristics such as gradient clipping and learning rate warmup. Our results demonstrate that successful model and hyperparameter choices allow the early optimization trajectory to either avoid— or navigate out of—regions of high curvature and into flatter regions that tolerate a higher learning rate. Our results suggest a unifying perspective on how disparate mitigation strategies for training instability ultimately address the same underlying failure mode of neural network optimization, namely poor conditioning. Inspired by the conditioning perspective, we show that learning rate warmup can improve training stability just as much as batch normalization, layer normalization, MetaInit, GradInit, and Fixup initialization. N/A In this work, we study the evolution of the loss Hessian across many classification tasks in order to understand the effect the curvature of the loss has on the training dynamics. Whereas prior work has focused on how different learning rates affect the loss Hessian observed during training, we also analyze the effects of model initialization, architectural choices, and common training heuristics such as gradient clipping and learning rate warmup. Our results demonstrate that successful model and hyperparameter choices allow the early optimization trajectory to either avoid— or navigate out of—regions of high curvature and into flatter regions that tolerate a higher learning rate. Our results suggest a unifying perspective on how disparate mitigation strategies for training instability ultimately address the same underlying failure mode of neural network optimization, namely poor conditioning. Inspired by the conditioning perspective, we show that learning rate warmup can improve training stability just as much as batch normalization, layer normalization, MetaInit, GradInit, and Fixup initialization. 1 INTRODUCTION Optimization of neural networks can easily fail. While recent architectural advances such as skip connections (He et al., 2016a) and Batch Normalization (Ioffe and Szegedy, 2015) have been applied successfully to produce architectures and hyperparameters that reliably train well, even small changes to a trainable configuration can easily result in training that diverges. More generally, producing a configuration that strikes the right balance between stable training and rapid optimization progress on a new domain can be difficult—practitioners and researchers have few reliable heuristics to guide them through the process. As a result, the specific hyperparameter tuning protocol has an outsized influence on the results (Choi et al., 2019; Sivaprasad et al., 2020) and successes often rely on large hyperparameter searches (Nado et al., 2021). Developing a principled understanding of what makes general architectures trainable would allow researchers to more reliably navigate this process and has the potential to dramatically accelerate research into finding better, more scalable architectures. 
The focus of the empirical investigation of this work is to better understand what limits the maximum trainable learning rate for deep learning models trained with the typical minibatch stochastic gradient descent (SGD) family algorithms. As part of this investigation, we examine several methods developed by the deep learning community that have enabled training at larger learning rates and improved performance. Many methods have been developed that can achieve this goal, notably normalization, learning rate warmup, gradient clipping (Pascanu et al., 2013), and better model initializations such as Fixup (Zhang et al., 2019b), MetaInit (Dauphin and Schoenholz, 2019), and GradInit (Zhu et al., 2021). While these methods are certainly not exactly equivalent, a key property they all have in common is that they can enable training at larger learning rates when applied to certain models (see for example Figure 1). ∗Equal Contribution. Correspondence to {gilmer, ghorbani}@google.com. A natural hypothesis is that methods which enable training at larger learning rates do so through reducing the sharpness1 of the loss surface during training. Indeed, this hypothesis has already been proposed as one of the beneficial effects of Batch Normalization (Ghorbani et al., 2019; Santurkar et al., 2018) and residual connections (Li et al., 2017), and quadratic models of the loss surface predict that optimization with SGD is unstable when λ1 > 2/η (Wu et al., 2018). However, recent empirical investigations into the relevance of quadratic stability bounds to neural network training have either focused on smaller models, focused on full batch training at small learning rates, and do not investigate connections between sharpness, model initialization and learning rate warmup (Cohen et al., 2021; Jastrzebski et al., 2020). In this work, we design a series of large scale experiments studying the evolution of the loss sharpness as we vary the learning rate, warmup period, initialization, and architectural choices. Our results demonstrate the central role that λ1 plays in neural network optimization—maintaining sufficiently small λ1 during optimization is a necessary condition for successful training at large learning rates. Consequently, reducing λ1 is a primary benefit of proper tuning of a number of architecture and optimization hyperparameters: including model initialization, location of normalization, and warmup schedule. Specifically, we show the following: • We provide large scale empirical confirmation that training of neural networks with SGD+momentum is stable only when the optimization trajectory primarily resides in a region of parameter space where λ1 . 2/η, where η denotes the learning rate. This corroborates the theoretical predictions of Wu et al. (2018) and recent empirical observations of Jastrzebski et al. (2020) and Cohen et al. (2021). • We demonstrate that several successful initialization strategies for architectures without normalization operate primarily by reducing curvature early in training, enabling training at larger learning rates. • We show that learning rate warmup gradually reduces λ1 during training, offering similar benefits to better model initialization. We connect the mechanism by which warmup operates to the dynamical stability model of Wu et al. (2018). • We show that learning rate warmup is a simple yet competitive baseline for research into better model initialization. 
We demonstrate that key progress in this area (Dauphin and Schoenholz, 2019; Zhang et al., 2019b; Zhu et al., 2021) can be matched by the application of learning rate warmup and/or gradient clipping alone. 1Throughout this work we will use the term sharpness to refer to the maximum eigenvalue of the loss Hessian, denoted as λ1. See Appendix B for more details. • Finally, we show that large loss curvature can result in poor scaling at large batch sizes and interventions designed to improve loss conditioning can drastically improve the model’s ability to leverage data parallelism. 2 RELATED WORK Understanding BatchNorm The loss Hessian has been a central object of study for understanding optimization of neural networks. Santurkar et al. (2018) argues that an important benefit of Batch Normalization is improved smoothness of the loss surface, while Lewkowycz et al. (2020) notes that this is improved smoothness is only observed when higher learning rates are used in combination with Batch Normalization. Our results are generally consistent with this current understanding of Batch Normalization, however some of our experiments provide additional nuance—notably we observe several instances where models suffer from training instability (and high loss curvature) early in training despite using Batch Normalization (see Section 4). Evolution of the loss Hessian Recent research has closely studied the interaction between sharpness and learning rate. Wu et al. (2018) provides a dynamical stability model which predicts that the loss curvature at convergence must satisfy2 λ1 ≤ 2/η. Recent work has provided empirical evidence that λ1 . 2/η often holds well before convergence (Cohen et al., 2021; Jastrzebski et al., 2020). Cohen et al. (2021) focused on full batch training at small learning rates, and observed “progressive sharpening”, where λ1 increases during training until λ1 ≈ 2.0/η. We observe the progressive sharpening phenomenon also occurs for many models trained with SGD, though we do not investigate batch sizes ≤ 8, where Cohen et al. (2021) argue that progressive sharpening does not occur. We note that Wu et al. (2018) equation 8 predicts the “edge of stability” is dependent on the batch size and that at small batch sizes this can be will below the 2/η bound. We confirm this prediction holds even early in training (see Appendix Figure 16). Lewkowycz et al. (2020) proves that for single hidden layer neural networks initialized at point with λ1 > 2.0/η and trained with an MSE loss may enter a “catapult” regime—where the loss increases early until a flatter region of the loss surface is found, with divergence occurring in cases where λ1 greatly exceeds 2.0/η. In contrast to the simplified setting considered in Lewkowycz et al. (2020), we find that divergence may occur even though λ1 2/η at initialization. 3 EXPERIMENTAL SETUP We investigate models trained on several benchmarks: CIFAR-10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015) for image classification, LM1B (Chelba et al., 2013) for Language Modeling, and WMT for Neural Machine Translation (NMT). On CIFAR-10 we consider the WideResnet (Zagoruyko and Komodakis, 2016) and DenseNet (Huang et al., 2017) architectures, both with and without Batch Normalization. We consider two variants of the DenseNet architecture. The standard variant from the open sourced code of Zhu et al. (2021) is considered in Figure 5 and Table 1. 
A less stable variant changes the strides in the average pooling layers to (1,1) is used for Figure 2 and is denoted as Stride-(1,1) DenseNet (see Appendix D.1 for a more detailed discussion). When training without Batch Normalization we consider several initialization strategies including the default “LeCun Normal” initialization, and running MetaInit. As a way to artificially induce worse initializations, we also consider experiments where we scale every variable produced by the default initialization by a constant factor α. The NMT models are trained on the WMT’16 EN-DE training set, tuned for hyper-parameters on the WMT’16 EN-DE validation set and evaluated on the WMT’14 EN-DE test set for BLEU scores. For NMT and LM1B Language Modeling, we train 6 layer Transformer models (Vaswani et al., 2017). Inspired from Xiong et al. (2020), we experiment with three Layer Norm settings: pre-Layer Norm, post-Layer Norm (Liu et al., 2020) and no Layer Norm for the transformer models. Each model is trained with various learning rates using cosine decay (unless mentioned explicitly). For warmup experiments we use linear warmup which starts at 0 and scales linearly to a max value η before applying cosine decay. To measure the max eigenvalue of the loss Hessian we use the 2This is a simplified, potentially loose bound. See the original work for a more general bound that depends on both the loss curvature and the noise covariance matrix. Lanczos method where the number of iterations varied as needed depending on the architecture (details provided in the appendix). 4 EARLY TRAINING INSTABILITY AND THE LOSS HESSIAN In Figure 2 we plot the curvature at initialization and during training for a series of models trained on different datasets (plots showing final performance for all models can be found in the appendix). Each row indicates a different base model, the left column plots the curvature of the model at initialization and indicates with an ‘X’ whether or not the model diverges when trained without warmup. On the right we plot the measured curvature and learning rate at a specified point during training. We observe across all datasets that successful training occurs only when optimization enters a region of parameter space where λ1 ≤ 2/η, and that divergent models are outside this region shortly before divergence. At initialization, some models can be successfully trained even when they start out in the unstable region and generally speaking, divergence is more likely for models deeper in the unstable region. For CIFAR-10 WideResnet, removing batch norm results in a model with higher curvature at initialization and results in divergent models when trained with a learning rate η > .1. Scaling the WideResnet initialization up by a factor of 1.5 exacerbates the problem, resulting in even higher curvature at initialization and divergence when η > 10−2. MetaInit starts the model out at a point with very small λ1, and allows training without Batch Normalization at higher learning rates than the default initialization. We also observed that higher learning rates can be unlocked when the models are trained with learning rate warmup. Warmup was particularly effective for models which exhibit large λ1 either at initialization or early in training. Other models such as the post activation Resnet-50, and WideResnet w/ Batch Normalization did not benefit from warmup at the considered learning rates (see Appendix). 
For the Stride-(1,1) DenseNet experiments, it is noteworthy that the models with Batch Normalization actually start out with higher curvature than the non-BN variants. This is contrary to the generally accepted narrative that Batch Normalization improves the smoothness of the loss surface (Ghorbani et al., 2019; Santurkar et al., 2018). We found that the Batch Normalization models were more unstable than the non-BN variants here, as some models diverged at smaller learning rates. However, when combined with warmup the BN models were trainable at learning rates η > .1, whereas this did not hold for the non-BN variants, which diverge both with and without warmup at these learning rates. This result suggests that BN still offers training stability for this model, and flatter curvature mid training if trained with warmup and a higher learning rate, however no smoothness benefits are observed at initialization. See Appendix D.1 for more details on this phenomenon. For Resnet-50 trained on ImageNet we compare two different residual blocks: the preactivation block (He et al., 2016b) and the more commonly used post activation block (He et al., 2016a). For the preactivation block, we also consider flipping the order of the ReLU activation and batch normalization, as was considered in Brock et al. (2021). We find that both preactivation models start out in a region of higher curvature relative to the post activation variant, and that these models diverge when η > .5 whereas the post activation variant is trainable with learning rates as large at 10. Notably, there are several models in our experiments which diverge despite starting out in a region where λ1 < 2/η. This occurs for both the pre and post layernorm transformer, and the WideResnet model initialized with MetaInit. We found for these divergent models that the curvature rapidly increases in the initial steps of training, which is partially visible in the mid training plot where we plot the final observed curvature before divergence. Full training curves for these models can be found in the appendix. This implies that measuring λ1 at initialization is not always sufficient to predict whether or not the model will be easily trained. Currently, some architectural innovations are motivated by an analysis of either gradient statistics or smoothness at initialization (Liu et al., 2020)—a more robust analysis would consider the evolution of these statistics under SGD. 5 THE INTERACTION BETWEEN LEARNING RATE WARMUP, INITIALIZATION AND CURVATURE The success of learning rate warmup is inconsistent with conventional optimization wisdom, which traditionally suggests adapting the step size to the curvature (see for example the discussion around equation 2.4 in McCandlish et al. (2018)). However, with the understanding that λ1 is a dynamic quantity whose evolution is tightly coupled with the learning rate schedule, the benefits of a warmup period are more easily understood. We argue that the success of learning rate warmup follows naturally from two properties of training deep models: 1. Models diverge when the learning rate is too large relative to the 2/λ1 bound. 2. When the learning rate only slightly exceeds 2/λ1 optimization is unstable until the parameters move to a region with smaller λ1 (Wu et al., 2018; Lewkowycz et al., 2020). The first criteria implies that we can’t start η off at too large of a value relative to λ1 at initialization. 
The second criteria implies that gradually increasing η can gradually “push” the parameters to a region of parameter space where optimization is stable (with lower values of λ1). In Figure 4 there is clear evidence for this “pushing”, as during the warmup period the we see that λ1 ≈ 2.0/η holds for a large part of the warmup phase. Furthermore, this approximation holds even as we vary the length of the warmup period. Other examples can be seen in Figure 3 (B and F), and Figure 15 in the appendix. Warmup is not the only method capable of reducing λ1 during training, one can instead initialize the model in a region where λ1 starts off small. Consider for example, the points A, B and C in Figure 3. Each point shows optimization of a non-BN WideResnet with peak learning rate of .1. In (A) we see the model diverges within 3 steps without warmup using the default initialization. In (B) we see that a linear warmup period results in λ1 progressively decreasing until the peak step size of .1 is reached at step 1000, with no divergence occurring. Finally in (C) we initialize the same model with MetaInit, at which point λ1 is small at initialization, and the model can be trained at η = .1 without warmup. Similar to the aforementioned MetaInit, the success of related initialization strategies can be explained by reduced λ1 early in training. In Figure 5 (left) we look at the evolution of λ1 during the GradInit meta optimization process and compare this with simply training the same model using gradient clipping3. Both methods result in λ1 decreasing dramatically, after which λ1 hovers around 2/η. 3Similar to warmup, gradient clipping reduces the step size in regions of large curvature. Notably, GradInit starts regular training off at λ1 significantly below the 2/η bound, however the curvature quickly increases within a few steps. Given that initialization and warmup serve similar roles in reducing λ1, we expect to be able to achieve similar performance using the two methods. As shown in Table 1 we can easily match key advances in this field by applying learning rate warmup alone (see Appendix for experimental details). Beyond controlling λ1 mid-training, the learning rate η controls more general conditioning measures of the loss surface. For example, in Figure 5 we observe that even the MetaInit gradient quotient— the conditioning measure directly optimized by this initialization strategy—is controlled by η mid training. This again provides further evidence that the primary benefit of this initialization method is to reduce λ1 at initialization. As shown, any gains by optimizing the more general gradient quotient must be short lived as the initialization has no control over the long term value. 6 THE EFFECTS OF CURVATURE ON BATCH SIZE SCALING So far, we discussed how large loss curvature limits the range of stable learning rates for training. In this section, we highlight how these limits on usable learning rates affect the model’s ability to effectively leverage larger batch sizes. Previous research has studied the interplay of the loss curvature and batch size scaling from various different perspectives. Most notably, Shallue et al. (2018) observe that increasing the batch size yields consistent improvements in training speed until a (problem-dependent) critical batch size is reached; increasing the batch size beyond this threshold yields diminishing improvements in training speed. Zhang et al. 
(2019a) observe that a simple Noisy Quadratic Model (NQM) is able to capture some of the empirical behavior observed in Shallue et al. (2018). Similarly, McCandlish et al. (2018) use quadratic approximations to the loss to provide a closed form expression for the critical batch size as a function of the loss Hessian and the covariance of the stochastic gradient. We contribute to this literature by highlighting the role of λ1 in the batch size scaling behavior of the model. For this analysis, we focus on three of the WideResnet variants considered in Figure 2—the BatchNorm model (a low curvature model), the non BatchNorm model (with moderate curvature), and the non BatchNorm model with 1.5X init scaling (with high curvature). We train these models while sweeping both the learning rate and the batch size.4 We then measure the number of training steps required to reach 85% validation accuracy, and the optimal learning rate found for each batch size. Similar to Shallue et al. (2018), we normalize the plotted steps to 85% accuracy by the value measured at batch size 64. The results are shown in Figure 6. A few observations are in order: The low curvature model shows almost linear speedups in training speed as the batch size increases. In contrast, the high curvature model exhibits only minimal improvements in training speed with larger batch sizes. These scaling differences are closely mirrored by how the optimal learning rate η∗ changes with the batch size: for the low curvature model η∗ increases linearly with the batch size, while for the high curvature model η∗ is fixed around 3× 10−3. Notably, for the high curvature model η∗ is almost always the 4We sweep for the optimal learning rate on a log-scale grid between 10−3 and 1. For batch size, we sweep over powers of 2 from 16 to 4096. largest non-divergent value—a clear indication that the high loss curvature slows down training by preventing larger values from being used. A clear picture emerges from these observations. Previous research suggests that in order to effectively leverage larger batch sizes, one has to increase the learning rate in tandem with the batch size Jastrzębski et al. (2017); Goyal et al. (2017); Shallue et al. (2018); McCandlish et al. (2018). Our results suggest that large values of λ1 place a sharp limit on the maximum the learning rate possible and therefore, limit the model’s ability to leverage data parallelism effectively. 7 CONCLUSION Through extensive empirical experiments measuring the evolution of the loss sharpness during training, we have demonstrated how different methods such as initialization, learning rate warmup, and normalization all enable higher learning rates to be used (without causing divergence) by reducing λ1 during training. It is noteworthy that two of the most popular models we investigated (the popular post-activation variant of the Resnet-50 and the standard WideResnet 28-10) did not benefit from learning rate warmup, and exhibited small values of λ1 throughout training at the learning rates we considered. Thus researchers and practitioners who primarily work with well-optimized architectures might never notice a benefit from using warmup. However, even seemingly trivial modifications to a working architecture can easily result in large values of λ1 and thus instability early in training—a naive response to such a situation would be to dramatically reduce the learning rate or, even worse, abandon the modification being investigated all together. 
We hope the perspective presented in this work can help future researchers better navigate such situations, either through investigating different initializations, applying warmup and gradient clipping, or changing the location of normalization layers in the model. A LIMITATIONS Our analysis has focused primarily on models trained with SGD with momentum. This decision was motivated to reduce additional confounds that arise when using adaptive preconditioning. Notably, it is unclear what the analogue of λ1 ≤ 2/η should be for a model trained with Adam. In the appendix, we provide evidence that loss curvature adaption to the learning rate does occur even for Transformer models trained with Adam, and that learning rate warmup results in the similar effect of the optimization trajectory being “pushed” to flatter regions. However we leave a deeper analysis into this for future work. Finally, while our experiments certainly raise questions about the efficacy better model initialization has on further accelerating training, our measurements has focused primarily on the (lack of) influence initialization has on λ1 mid training. It is possible that better initializations could have lasting influence on the broader Hessian eigenspectrum (for example improving the ration λk/λ1 for smaller eigenvalues λk) and that our analysis is missing such an effect. B BRIEF REVIEW OF THE LOSS HESSIAN, EIGENVALUES AND QUADRATIC STABILITY BOUNDS For completeness, we include a formal definition of the fundamental mathematical quantities discussed in the paper. We derive most of this discussion from the relevant chapters in Horn and Johnson (2012) and Boyd et al. (2004). We refer the reader to these sources for a more detailed discussion. B.1 THE HESSIAN MATRIX The second derivative or Hessian matrix of the loss function L(·) at a point θ ∈ Rn is denoted by H(θ) ∈ Rn×n where ∀1 ≤ i, j ≤ n H(θ)i,j = ∂2L(θ) ∂θi∂θj . (1) Moreover, by Schwarz’s theorem, if the second partial derivatives of L(·) are continuous at θ, the matrix H(θ) is symmetric. This is a broad condition that holds for all the loss surfaces we examine in the main text (beyond a set of measure zero). B.2 EIGENVALUES Definition 1 Let A ∈ Rn×n. If a scalar λ and a nonzero vector x satisfy Ax = λx, λ ∈ C, x ∈ Cn, x 6= 0 (2) then λ is called an eigenvalue of A and x is called an eigenvector of A associated with λ. If A is a symmetric real matrix (such as the Hessian matrix), A can be factored as A = QΛQT , (3) where Q ∈ Rn×n is an orthogonal matrix and Λ = diag(λ1, . . . , λn) ∈ Rn×n is a real diagonal matrix. Here, {λi}ni=1 are all of the eigenvalues of A. We order λi such that λ1 ≥ λ2 · · · ≥ λn. In this ordering, λ1 corresponds to the maximum eigenvalue of A and λn corresponds to its minimum eigenvalue. The maximum and minimum eigenvalues of A satisfy the following important properties: λ1 = sup x6=0 xTAx xTx , λn = inf x 6=0 xTAx xTx . (4) In particular, for any x ∈ Rn, we have λn‖x‖22 ≤ xTAx ≤ λ1‖x‖22. B.3 STABILITY OF GRADIENT DESCENT FOR QUADRATIC LOSS Now that we have established the basics, let’s derive the stability condition for GD applied to a quadratic loss function. Note that Wu et al. (2018) and Cohen et al. (2021) provide more general bounds for the stability of SGD-type optimization algorithms. Here, we state & derive the stability condition for GD for the sake of completeness. Let L(θ) = 1 2 θTHθ, where H is a symmetric matrix with non-negative eigenvalues. Let’s consider GD dynamics starting from a random point θ0. 
Under GD with a fixed step-size η > 0, we have θt+1 = θt − η∇L(θt) = θt − ηHθt = (I − ηH)θt. Continuing this iteration to step 0 yields θt = (I − ηH)tθ0. (5) As t→∞, this iteration is stable iff5 the eigenvalues of (I − ηH) have absolute magnitude bounded by one, which can be stated equivalently as 1− ηλ1 ≥ −1⇐⇒ 2 ≥ λ1η ⇐⇒ 2 η ≥ λ1, which is the exact condition discussed and explored in the main text. 5As θ0 was randomly chosen, we assume it has a non-zero overlap with all eigenvectors. C MISCELLANEOUS FIGURES D PERFORMANCE OF MODELS IN FIGURE 2 In this this section, we plot the performance vs learning rate for all of the models shown in Figure 2 of the main text. These are shown in Figures 9, 13, 10, and 11. For models which diverged, we plot the best test performance achieved before divergence. In all settings, high curvature affects the final performance by limiting the use of higher learning rates. We also noted several models in Figure 2 which diverged despite training starting out in a stable region of parameter space. In Figure 12 we plot the evolution of the loss sharpness during training, showing that it quickly enters a region where λ1 > 2.0/η before diverging around step 90. D.1 DISCUSSION OF STRIDE-(1,1) DENSENET EXPERIMENTS In this section we discuss in more detail the Stride-(1,1) DenseNet experiments shown in Figure 2 in the main text. These experiments use a non-standard version of the DenseNet architecture where all average pooing strides are set to (1,1). Note the experiments in Figrue 5 and Table 1 instead use the standard strides implementation from the open sourced code of Zhu et al. (2021). The Stride-(1,1) DenseNet architecture is noteworthy because it is a counter example to common intuition that adding Batch Normalization results in flatter curvature. As shown in Figure 2 (left), the BN variants all have high curvature at initialization, however the right hand side plot shows that the mid training curvature becomes comparable to the non-BN variants. In Figure 13 we provide a more detailed analysis to understand what is happening with BN. First we plot the performance of the BN vs non-BN models both with and without warmup. The differences are striking. Without warmup, we see the BN performance is highly stochastic, some trials outperform the non-BN variants, while some trials underperform the non-BN variants. However, when trained with 1000 steps of warmup the BN variants now significantly outperform the non-BN models are all considered learning rates. They can even be successfully trained at higher learning rates than the non-BN variants, despite the high initial curvature. To provide further detail, we show the training curves of select individual runs, both the evolution of the training loss and the evolution of curvature. The BN variants all exhibit catapult behavior early—the loss increases initially until the parameters enter a region of flatter curvature. Warmup helps the BN variants, and significantly reduces the severity of the catapult phase while enabling faster long term training. Additionally, when we add warmup we find that the BN variants can now be trained at higher learning rates than the non-BN variant. As shown at learning rate of .22, the non-BN model diverges during the warmup phase despite lower initial curvature. Based on these experiments we arrive at the following conclusions. First, adding BN to the Stride(1,1) DenseNet architecture results in high curvature at initialization, which results in a short period of instability during training. 
However, once the parameters escape this region of large curvature, the BN variant exhibits favorable training dynamics relative to the non-BN variants. Thus, there still seems to be benefit to adding BN, assuming steps are taken to mitigate the initial period of high curvature. The fact that adding BN to a model can result in high initial curvature is not without precedent, as Yang et al. (2019) observe that adding BN to deep fully connected networks can result in exploding gradients at initialization. These experiments highlight one of the primary takeaways of this work: that maintaining flat curvature throughout training is a necessary (not sufficient) condition for stable training of neural networks. Thus it is not the presence of BN that is necessary for stable training, instead BN is generally a useful tool for reducing curvature (and thus stabilizing training). BN has clear benefits of improving curvature in most cases, but it is possible to produce configurations where adding BN paradoxically results in higher initial curvature than the non-BN variant. In these cases training is initially unstable, but once the curvature is reduced we see benefits later in training of using BN. E DETAILS ON COMPUTING THE HESSIAN EIGENSPECTRUM VIA LANCZOS We use Lanczos iterations to estimate the top eigenvalue of the Hessian. Lanczos algorithm only requires Hessian-vector products which can be efficiently computed via Pearlmutter’s trick Pearlmutter (1994). Previous research has demonstrated that this approach provides a robust and scalable framework to examine the eigenvalues of the Hessian for large neural networks Ghorbani et al. (2019); Papyan (2018). For our WMT / LM1B experiments, we run the algorithm for 45 steps while for image models, we use 40 steps. When monitoring the evolution of the top eigenvalue as a function of the number of Lanczos steps, in all cases except one, we observe that the algorithm converges. For the case of Resnet with ReLU→BN ordering, due to a very small eigengap between the top eigenvalue and the bulk, the convergence is significantly slower. We use 200 Lanczos steps in this case to alleviate the issue. For this model, estimating λ1 via power iteration (as is commonly done in the deep learning literature) will incorrectly the largest negative eigenvalue, not λ1 as desired. It is well-known that Lanczos algorithm can suffer from numerical instabilities caused by finiteprecision arithmetic. To alleviate these issues, Lanczos vectors are stored in float64 accuracy and we perform reorthogonalized at each step of the algorithm. F TRAINING DETAILS FOR TABLE 1 F.1 NEURAL MACHINE TRANSLATION Neural Machine Translation experiments are based on the Transformer models (Vaswani et al., 2017). We use separate embeddings on encoder and decoder, and a common word piece vocabulary of size 32000. For depth, we use 6 layers on both encoder and decoder. For width, we experiment with two models, namely Transformer-Base and Transformer-Wide. For Transformer-Base, we use word embeddings with 512 dimensions, 8 heads and 2048 feed-forward dimension. For TransformerWide we use word embeddings with 1024 dimensions, 16 heads and 4096 feed-forward dimension. The experiments reported in Figures 3 and 5 use Transformer-Base. The experiments reported in Table 1 use Transformer-Wide models trained with Adam (Kingma and Ba, 2014). We sweep over warm-up, learning rate, gradient clipping and init_scaling and optimize for validation loss to evaluate performance on test set BLEU reported in Table 1. 
All the models are trained for 60 epochs at batch size of 1024 for Transformer-Base models, and batch size of 512 for Transformer-Big models. We use dropout of 0.1, label smoothing of 0.1 and no weight decay for all these models. F.2 DENSENETS In Table 1 the ResNet-50 (w/o BN) architecture was trained for 100 epochs at batch size 512, with l2 regularization of 5e-5, dropout of .3. It was trained with SGD with nesterov momentum of .9 and learning rate of .2. We applied gradient clipping at global l2 norm of 5 and used linear learning rate warmup with warmup period of 1000 steps. For Table 1, the DenseNet-100 model was trained using the Gradinit codebase 6 by modifying the supplied DenseNet script to apply gradient clipping of norm 6 and to use the default initialization instead of GradInit. G TRAINING DETAILS FOR FIGURE 2 The WideResnet-28-10 models were trained with batch size of 1024 for 300 epochs. We applied the MixUp augmentation(Zhang et al., 2017). For learning rate warmup we used 1000 steps of linear warmup until the peak learning rate is achieved, at which point the learning rate is decayed according to the cosine schedule. The Stride-(1,1) DenseNet models were trained with batch size of 512 using the SGD optimizer with momentum of 0.9, weight decay of 5e-4, L2 regularization of 1e-4 and warmup of 1000 steps (for the models where warmup is used) followed by cosine decay. The models were trained for 200 epochs. For the DenseNet architecture we used growth_rate of 32 and reduction of 0.5. The Resnet-50 models were trained with batch size of 2048 using the SGD optimizer with nesterov momentum of .9. The learning rate schedule was the same as in the WideResnet case, with linear warmup of 1000 steps followed by cosine decay. We applied label smoothing of .1 and used the standard flip plus crop for data augmentation. The Transformer models on LM1B were trained at batch size 1024 using SGD with nesterov momentum of .9. We use embedding dimension of 512, 6 layers with 8 heads and MLP hidden dimension of 1024. The attention dropout rate was .1. The learning rate schedule followed the same recipe as in the Resnet cases. H CURVATURE ADAPTATION WITH THE ADAM OPTIMIZER The discussion in the main text focused primarily on models trained with SGD and momentum. In this appendix, we briefly examine if similar conclusions hold for optimizers such as Adam that use preconditioning. It is unclear a priori whether or not curvature adapation to the learning rate should occur for optimizers which apply preconditioning. However, given that Adam is a diagonal preconditioner applied to a non-diagonal Hessian, there may be some similar effects observed. 6https://github.com/zhuchen03/gradinit Consider a simple quadratic loss L(θ) = 1 2 θTHθ, H 0. where optimization is performed via preconditioned gradient descent with a fixed diagonal preconditioning matrix D: θt = θt−1 − ηD−1∇L(θt−1) = θt−1 − ηD−1 ( Hθt−1 ) = ( I − ηD−1H ) θt−1 = ( I − ηD−1H )t θ0 As such, this simple model would suggest that the max eigenvalue of the following matrix may be related to training instability of models trained with Adam λmax(D −1H) = λmax(D −1/2HD−1/2). (6) While (6) does not take into the account the effects of adaptive preconditioning or momentum, we find some empirical evidence that this approximation provides understanding into the stability of the optimization. Figure 14 below examines the evolution of λmax(D−1/2HD−1/2) for three Transformer models trained with Adam and different warm-up lengths. 
Here, D is the diagonal matrix with D_{i,i} = √(v̂_i) + ε, where v̂ is Adam's bias-corrected exponential moving average of the squared gradients and ε is Adam's stability constant. We observe that, similar to the models trained with momentum, the maximum (preconditioned) Hessian eigenvalue adapts to the warm-up schedule (green and red markers). We notice that, perhaps due to the effect of momentum or adaptive preconditioning, the threshold 2/η does not align well with the data. Instead, an empirically corrected threshold of 40/η fits the data better. We observe that instabilities in model training coincide exactly with λmax(D−1/2HD−1/2) crossing this empirically corrected threshold. These observations suggest that some of the insights discussed in the main text carry over to the case of adaptive optimizers. We leave further exploration of this more complex setting to future work.

I COMPUTE RESOURCES USED

Nearly all experiments utilized the Google Cloud Platform with v2 cloud TPUs, except for the following: the Figure 2 Resnet-50 and Stride-(1,1) DenseNet experiments utilized v3 cloud TPUs, while the GradInit code was run on a cloud machine with a single V100 GPU. The Figure 2 experiments were run in parallel using up to 50 v2 TPUs concurrently over the period of a few days. Additionally, all the Machine Translation models were trained on v3 cloud TPUs.
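As a rough illustration of the quadratic-model argument in Appendix H above (and the 2/η bound of Appendix B.3), the following NumPy sketch runs preconditioned gradient descent on a toy quadratic with a fixed diagonal preconditioner standing in for Adam's D. It is a sketch under these simplifying assumptions, not the experimental setup, and all names and dimensions are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50
A = rng.normal(size=(dim, dim))
H = A.T @ A / dim                         # symmetric positive semi-definite "Hessian"
D = np.diag(rng.uniform(0.5, 2.0, dim))   # fixed diagonal preconditioner (stand-in for Adam's D)

# Preconditioned sharpness: lambda_max(D^{-1/2} H D^{-1/2}).
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
lam_max = np.linalg.eigvalsh(D_inv_sqrt @ H @ D_inv_sqrt).max()

def run_preconditioned_gd(eta, steps=200):
    """Iterate theta <- theta - eta * D^{-1} H theta and return the final quadratic loss."""
    theta = rng.normal(size=dim)
    for _ in range(steps):
        theta = theta - eta * np.linalg.solve(D, H @ theta)
    return 0.5 * theta @ H @ theta

for eta in [0.5 / lam_max, 1.9 / lam_max, 2.1 / lam_max]:
    print(f"eta*lam_max = {eta * lam_max:.2f}, final loss = {run_preconditioned_gd(eta):.3e}")
```

The iterates should contract whenever η·λmax(D−1/2HD−1/2) < 2 and blow up once that product exceeds 2, which is the quadratic-model threshold discussed above (before the empirical correction needed for full Adam with momentum and adaptive preconditioning).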
1. What are the main contributions and key findings of the paper regarding neural network trainability? 2. How does the paper investigate the factors that play an important role in making neural networks trainable? 3. What is the significance of the largest eigenvalue of the Hessian matrix in the paper's findings? 4. How do the authors demonstrate that learning rate warmup can provide a significant improvement? 5. Are there any limitations or areas for further investigation in the paper's approach or conclusions?
Summary Of The Paper Review
Summary Of The Paper
The paper performs an extensive empirical investigation into the factors that play an important role in making neural networks trainable. One of the key quantities that the authors find to give significant insight is the largest eigenvalue of the Hessian matrix. They show that models that train successfully tend to have learning rates close to the well-known critical bound for GD, 2/\lambda. Further investigation of various architectural choices, such as normalizations and initialization techniques, indicates that the value of \lambda behaves somewhat consistently after the initial training period for well-training models. In addition, they demonstrate that learning rate warmup can provide a significant improvement by pushing parameters into regions with lower values of \lambda.

Review
The paper seems well written and, I think, manages to condense a significant amount of empirical work into the short page limit we have. I think the analysis done across multiple architectures and models provides a good basis for the authors' claims in the text.

Strengths:
The paper contains a very solid set of large-scale and in-depth evaluations, which is something well needed for this kind of empirical work. The different analyses present the vast amount of results and the key takeaways from the experiments well.

Weaknesses:
Since we do observe models that diverge despite beginning in stable regions (Fig. 3 D), it seems that the initial warmup has a significant effect on where it takes the model beyond its initialization, which is not discussed in much depth. In the DenseNet experiment in Table 1, there are no reported results for including warmup.
ICLR
Title A Loss Curvature Perspective on Training Instabilities of Deep Learning Models Abstract In this work, we study the evolution of the loss Hessian across many classification tasks in order to understand the effect the curvature of the loss has on the training dynamics. Whereas prior work has focused on how different learning rates affect the loss Hessian observed during training, we also analyze the effects of model initialization, architectural choices, and common training heuristics such as gradient clipping and learning rate warmup. Our results demonstrate that successful model and hyperparameter choices allow the early optimization trajectory to either avoid— or navigate out of—regions of high curvature and into flatter regions that tolerate a higher learning rate. Our results suggest a unifying perspective on how disparate mitigation strategies for training instability ultimately address the same underlying failure mode of neural network optimization, namely poor conditioning. Inspired by the conditioning perspective, we show that learning rate warmup can improve training stability just as much as batch normalization, layer normalization, MetaInit, GradInit, and Fixup initialization. N/A In this work, we study the evolution of the loss Hessian across many classification tasks in order to understand the effect the curvature of the loss has on the training dynamics. Whereas prior work has focused on how different learning rates affect the loss Hessian observed during training, we also analyze the effects of model initialization, architectural choices, and common training heuristics such as gradient clipping and learning rate warmup. Our results demonstrate that successful model and hyperparameter choices allow the early optimization trajectory to either avoid— or navigate out of—regions of high curvature and into flatter regions that tolerate a higher learning rate. Our results suggest a unifying perspective on how disparate mitigation strategies for training instability ultimately address the same underlying failure mode of neural network optimization, namely poor conditioning. Inspired by the conditioning perspective, we show that learning rate warmup can improve training stability just as much as batch normalization, layer normalization, MetaInit, GradInit, and Fixup initialization. 1 INTRODUCTION Optimization of neural networks can easily fail. While recent architectural advances such as skip connections (He et al., 2016a) and Batch Normalization (Ioffe and Szegedy, 2015) have been applied successfully to produce architectures and hyperparameters that reliably train well, even small changes to a trainable configuration can easily result in training that diverges. More generally, producing a configuration that strikes the right balance between stable training and rapid optimization progress on a new domain can be difficult—practitioners and researchers have few reliable heuristics to guide them through the process. As a result, the specific hyperparameter tuning protocol has an outsized influence on the results (Choi et al., 2019; Sivaprasad et al., 2020) and successes often rely on large hyperparameter searches (Nado et al., 2021). Developing a principled understanding of what makes general architectures trainable would allow researchers to more reliably navigate this process and has the potential to dramatically accelerate research into finding better, more scalable architectures. 
The focus of the empirical investigation of this work is to better understand what limits the maximum trainable learning rate for deep learning models trained with the typical minibatch stochastic gradient descent (SGD) family algorithms. As part of this investigation, we examine several methods developed by the deep learning community that have enabled training at larger learning rates and improved performance. Many methods have been developed that can achieve this goal, notably normalization, learning rate warmup, gradient clipping (Pascanu et al., 2013), and better model initializations such as Fixup (Zhang et al., 2019b), MetaInit (Dauphin and Schoenholz, 2019), and GradInit (Zhu et al., 2021). While these methods are certainly not exactly equivalent, a key property they all have in common is that they can enable training at larger learning rates when applied to certain models (see for example Figure 1). ∗Equal Contribution. Correspondence to {gilmer, ghorbani}@google.com. A natural hypothesis is that methods which enable training at larger learning rates do so through reducing the sharpness1 of the loss surface during training. Indeed, this hypothesis has already been proposed as one of the beneficial effects of Batch Normalization (Ghorbani et al., 2019; Santurkar et al., 2018) and residual connections (Li et al., 2017), and quadratic models of the loss surface predict that optimization with SGD is unstable when λ1 > 2/η (Wu et al., 2018). However, recent empirical investigations into the relevance of quadratic stability bounds to neural network training have either focused on smaller models, focused on full batch training at small learning rates, and do not investigate connections between sharpness, model initialization and learning rate warmup (Cohen et al., 2021; Jastrzebski et al., 2020). In this work, we design a series of large scale experiments studying the evolution of the loss sharpness as we vary the learning rate, warmup period, initialization, and architectural choices. Our results demonstrate the central role that λ1 plays in neural network optimization—maintaining sufficiently small λ1 during optimization is a necessary condition for successful training at large learning rates. Consequently, reducing λ1 is a primary benefit of proper tuning of a number of architecture and optimization hyperparameters: including model initialization, location of normalization, and warmup schedule. Specifically, we show the following: • We provide large scale empirical confirmation that training of neural networks with SGD+momentum is stable only when the optimization trajectory primarily resides in a region of parameter space where λ1 . 2/η, where η denotes the learning rate. This corroborates the theoretical predictions of Wu et al. (2018) and recent empirical observations of Jastrzebski et al. (2020) and Cohen et al. (2021). • We demonstrate that several successful initialization strategies for architectures without normalization operate primarily by reducing curvature early in training, enabling training at larger learning rates. • We show that learning rate warmup gradually reduces λ1 during training, offering similar benefits to better model initialization. We connect the mechanism by which warmup operates to the dynamical stability model of Wu et al. (2018). • We show that learning rate warmup is a simple yet competitive baseline for research into better model initialization. 
We demonstrate that key progress in this area (Dauphin and Schoenholz, 2019; Zhang et al., 2019b; Zhu et al., 2021) can be matched by the application of learning rate warmup and/or gradient clipping alone. 1Throughout this work we will use the term sharpness to refer to the maximum eigenvalue of the loss Hessian, denoted as λ1. See Appendix B for more details. • Finally, we show that large loss curvature can result in poor scaling at large batch sizes and interventions designed to improve loss conditioning can drastically improve the model’s ability to leverage data parallelism. 2 RELATED WORK Understanding BatchNorm The loss Hessian has been a central object of study for understanding optimization of neural networks. Santurkar et al. (2018) argues that an important benefit of Batch Normalization is improved smoothness of the loss surface, while Lewkowycz et al. (2020) notes that this is improved smoothness is only observed when higher learning rates are used in combination with Batch Normalization. Our results are generally consistent with this current understanding of Batch Normalization, however some of our experiments provide additional nuance—notably we observe several instances where models suffer from training instability (and high loss curvature) early in training despite using Batch Normalization (see Section 4). Evolution of the loss Hessian Recent research has closely studied the interaction between sharpness and learning rate. Wu et al. (2018) provides a dynamical stability model which predicts that the loss curvature at convergence must satisfy2 λ1 ≤ 2/η. Recent work has provided empirical evidence that λ1 . 2/η often holds well before convergence (Cohen et al., 2021; Jastrzebski et al., 2020). Cohen et al. (2021) focused on full batch training at small learning rates, and observed “progressive sharpening”, where λ1 increases during training until λ1 ≈ 2.0/η. We observe the progressive sharpening phenomenon also occurs for many models trained with SGD, though we do not investigate batch sizes ≤ 8, where Cohen et al. (2021) argue that progressive sharpening does not occur. We note that Wu et al. (2018) equation 8 predicts the “edge of stability” is dependent on the batch size and that at small batch sizes this can be will below the 2/η bound. We confirm this prediction holds even early in training (see Appendix Figure 16). Lewkowycz et al. (2020) proves that for single hidden layer neural networks initialized at point with λ1 > 2.0/η and trained with an MSE loss may enter a “catapult” regime—where the loss increases early until a flatter region of the loss surface is found, with divergence occurring in cases where λ1 greatly exceeds 2.0/η. In contrast to the simplified setting considered in Lewkowycz et al. (2020), we find that divergence may occur even though λ1 2/η at initialization. 3 EXPERIMENTAL SETUP We investigate models trained on several benchmarks: CIFAR-10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015) for image classification, LM1B (Chelba et al., 2013) for Language Modeling, and WMT for Neural Machine Translation (NMT). On CIFAR-10 we consider the WideResnet (Zagoruyko and Komodakis, 2016) and DenseNet (Huang et al., 2017) architectures, both with and without Batch Normalization. We consider two variants of the DenseNet architecture. The standard variant from the open sourced code of Zhu et al. (2021) is considered in Figure 5 and Table 1. 
A less stable variant changes the strides in the average pooling layers to (1,1) is used for Figure 2 and is denoted as Stride-(1,1) DenseNet (see Appendix D.1 for a more detailed discussion). When training without Batch Normalization we consider several initialization strategies including the default “LeCun Normal” initialization, and running MetaInit. As a way to artificially induce worse initializations, we also consider experiments where we scale every variable produced by the default initialization by a constant factor α. The NMT models are trained on the WMT’16 EN-DE training set, tuned for hyper-parameters on the WMT’16 EN-DE validation set and evaluated on the WMT’14 EN-DE test set for BLEU scores. For NMT and LM1B Language Modeling, we train 6 layer Transformer models (Vaswani et al., 2017). Inspired from Xiong et al. (2020), we experiment with three Layer Norm settings: pre-Layer Norm, post-Layer Norm (Liu et al., 2020) and no Layer Norm for the transformer models. Each model is trained with various learning rates using cosine decay (unless mentioned explicitly). For warmup experiments we use linear warmup which starts at 0 and scales linearly to a max value η before applying cosine decay. To measure the max eigenvalue of the loss Hessian we use the 2This is a simplified, potentially loose bound. See the original work for a more general bound that depends on both the loss curvature and the noise covariance matrix. Lanczos method where the number of iterations varied as needed depending on the architecture (details provided in the appendix). 4 EARLY TRAINING INSTABILITY AND THE LOSS HESSIAN In Figure 2 we plot the curvature at initialization and during training for a series of models trained on different datasets (plots showing final performance for all models can be found in the appendix). Each row indicates a different base model, the left column plots the curvature of the model at initialization and indicates with an ‘X’ whether or not the model diverges when trained without warmup. On the right we plot the measured curvature and learning rate at a specified point during training. We observe across all datasets that successful training occurs only when optimization enters a region of parameter space where λ1 ≤ 2/η, and that divergent models are outside this region shortly before divergence. At initialization, some models can be successfully trained even when they start out in the unstable region and generally speaking, divergence is more likely for models deeper in the unstable region. For CIFAR-10 WideResnet, removing batch norm results in a model with higher curvature at initialization and results in divergent models when trained with a learning rate η > .1. Scaling the WideResnet initialization up by a factor of 1.5 exacerbates the problem, resulting in even higher curvature at initialization and divergence when η > 10−2. MetaInit starts the model out at a point with very small λ1, and allows training without Batch Normalization at higher learning rates than the default initialization. We also observed that higher learning rates can be unlocked when the models are trained with learning rate warmup. Warmup was particularly effective for models which exhibit large λ1 either at initialization or early in training. Other models such as the post activation Resnet-50, and WideResnet w/ Batch Normalization did not benefit from warmup at the considered learning rates (see Appendix). 
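For reference, the linear-warmup-then-cosine schedule described in Section 3 can be written as a small function. This is a minimal sketch assuming the learning rate starts at zero and decays to zero by the end of training; the step counts below are placeholders and the function name is ours, not from the paper's code.

```python
import math

def warmup_cosine_lr(step, peak_lr, warmup_steps=1000, total_steps=100_000):
    """Linear warmup from 0 to peak_lr, then cosine decay to 0.

    Mirrors the schedule described in Section 3: the learning rate rises
    linearly for `warmup_steps`, reaches `peak_lr`, and then follows a
    cosine decay over the remaining steps.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * min(1.0, progress)))

# Example: peak learning rate 0.1 with 1000 warmup steps.
for s in [0, 500, 1000, 50_000, 100_000]:
    print(s, round(warmup_cosine_lr(s, peak_lr=0.1), 5))
```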
For the Stride-(1,1) DenseNet experiments, it is noteworthy that the models with Batch Normalization actually start out with higher curvature than the non-BN variants. This is contrary to the generally accepted narrative that Batch Normalization improves the smoothness of the loss surface (Ghorbani et al., 2019; Santurkar et al., 2018). We found that the Batch Normalization models were more unstable than the non-BN variants here, as some models diverged at smaller learning rates. However, when combined with warmup the BN models were trainable at learning rates η > .1, whereas this did not hold for the non-BN variants, which diverge both with and without warmup at these learning rates. This result suggests that BN still offers training stability for this model, and flatter curvature mid training if trained with warmup and a higher learning rate, however no smoothness benefits are observed at initialization. See Appendix D.1 for more details on this phenomenon. For Resnet-50 trained on ImageNet we compare two different residual blocks: the preactivation block (He et al., 2016b) and the more commonly used post activation block (He et al., 2016a). For the preactivation block, we also consider flipping the order of the ReLU activation and batch normalization, as was considered in Brock et al. (2021). We find that both preactivation models start out in a region of higher curvature relative to the post activation variant, and that these models diverge when η > .5 whereas the post activation variant is trainable with learning rates as large at 10. Notably, there are several models in our experiments which diverge despite starting out in a region where λ1 < 2/η. This occurs for both the pre and post layernorm transformer, and the WideResnet model initialized with MetaInit. We found for these divergent models that the curvature rapidly increases in the initial steps of training, which is partially visible in the mid training plot where we plot the final observed curvature before divergence. Full training curves for these models can be found in the appendix. This implies that measuring λ1 at initialization is not always sufficient to predict whether or not the model will be easily trained. Currently, some architectural innovations are motivated by an analysis of either gradient statistics or smoothness at initialization (Liu et al., 2020)—a more robust analysis would consider the evolution of these statistics under SGD. 5 THE INTERACTION BETWEEN LEARNING RATE WARMUP, INITIALIZATION AND CURVATURE The success of learning rate warmup is inconsistent with conventional optimization wisdom, which traditionally suggests adapting the step size to the curvature (see for example the discussion around equation 2.4 in McCandlish et al. (2018)). However, with the understanding that λ1 is a dynamic quantity whose evolution is tightly coupled with the learning rate schedule, the benefits of a warmup period are more easily understood. We argue that the success of learning rate warmup follows naturally from two properties of training deep models: 1. Models diverge when the learning rate is too large relative to the 2/λ1 bound. 2. When the learning rate only slightly exceeds 2/λ1 optimization is unstable until the parameters move to a region with smaller λ1 (Wu et al., 2018; Lewkowycz et al., 2020). The first criteria implies that we can’t start η off at too large of a value relative to λ1 at initialization. 
The second criteria implies that gradually increasing η can gradually “push” the parameters to a region of parameter space where optimization is stable (with lower values of λ1). In Figure 4 there is clear evidence for this “pushing”, as during the warmup period the we see that λ1 ≈ 2.0/η holds for a large part of the warmup phase. Furthermore, this approximation holds even as we vary the length of the warmup period. Other examples can be seen in Figure 3 (B and F), and Figure 15 in the appendix. Warmup is not the only method capable of reducing λ1 during training, one can instead initialize the model in a region where λ1 starts off small. Consider for example, the points A, B and C in Figure 3. Each point shows optimization of a non-BN WideResnet with peak learning rate of .1. In (A) we see the model diverges within 3 steps without warmup using the default initialization. In (B) we see that a linear warmup period results in λ1 progressively decreasing until the peak step size of .1 is reached at step 1000, with no divergence occurring. Finally in (C) we initialize the same model with MetaInit, at which point λ1 is small at initialization, and the model can be trained at η = .1 without warmup. Similar to the aforementioned MetaInit, the success of related initialization strategies can be explained by reduced λ1 early in training. In Figure 5 (left) we look at the evolution of λ1 during the GradInit meta optimization process and compare this with simply training the same model using gradient clipping3. Both methods result in λ1 decreasing dramatically, after which λ1 hovers around 2/η. 3Similar to warmup, gradient clipping reduces the step size in regions of large curvature. Notably, GradInit starts regular training off at λ1 significantly below the 2/η bound, however the curvature quickly increases within a few steps. Given that initialization and warmup serve similar roles in reducing λ1, we expect to be able to achieve similar performance using the two methods. As shown in Table 1 we can easily match key advances in this field by applying learning rate warmup alone (see Appendix for experimental details). Beyond controlling λ1 mid-training, the learning rate η controls more general conditioning measures of the loss surface. For example, in Figure 5 we observe that even the MetaInit gradient quotient— the conditioning measure directly optimized by this initialization strategy—is controlled by η mid training. This again provides further evidence that the primary benefit of this initialization method is to reduce λ1 at initialization. As shown, any gains by optimizing the more general gradient quotient must be short lived as the initialization has no control over the long term value. 6 THE EFFECTS OF CURVATURE ON BATCH SIZE SCALING So far, we discussed how large loss curvature limits the range of stable learning rates for training. In this section, we highlight how these limits on usable learning rates affect the model’s ability to effectively leverage larger batch sizes. Previous research has studied the interplay of the loss curvature and batch size scaling from various different perspectives. Most notably, Shallue et al. (2018) observe that increasing the batch size yields consistent improvements in training speed until a (problem-dependent) critical batch size is reached; increasing the batch size beyond this threshold yields diminishing improvements in training speed. Zhang et al. 
(2019a) observe that a simple Noisy Quadratic Model (NQM) is able to capture some of the empirical behavior observed in Shallue et al. (2018). Similarly, McCandlish et al. (2018) use quadratic approximations to the loss to provide a closed form expression for the critical batch size as a function of the loss Hessian and the covariance of the stochastic gradient. We contribute to this literature by highlighting the role of λ1 in the batch size scaling behavior of the model. For this analysis, we focus on three of the WideResnet variants considered in Figure 2—the BatchNorm model (a low curvature model), the non BatchNorm model (with moderate curvature), and the non BatchNorm model with 1.5X init scaling (with high curvature). We train these models while sweeping both the learning rate and the batch size.4 We then measure the number of training steps required to reach 85% validation accuracy, and the optimal learning rate found for each batch size. Similar to Shallue et al. (2018), we normalize the plotted steps to 85% accuracy by the value measured at batch size 64. The results are shown in Figure 6. A few observations are in order: The low curvature model shows almost linear speedups in training speed as the batch size increases. In contrast, the high curvature model exhibits only minimal improvements in training speed with larger batch sizes. These scaling differences are closely mirrored by how the optimal learning rate η∗ changes with the batch size: for the low curvature model η∗ increases linearly with the batch size, while for the high curvature model η∗ is fixed around 3× 10−3. Notably, for the high curvature model η∗ is almost always the 4We sweep for the optimal learning rate on a log-scale grid between 10−3 and 1. For batch size, we sweep over powers of 2 from 16 to 4096. largest non-divergent value—a clear indication that the high loss curvature slows down training by preventing larger values from being used. A clear picture emerges from these observations. Previous research suggests that in order to effectively leverage larger batch sizes, one has to increase the learning rate in tandem with the batch size Jastrzębski et al. (2017); Goyal et al. (2017); Shallue et al. (2018); McCandlish et al. (2018). Our results suggest that large values of λ1 place a sharp limit on the maximum the learning rate possible and therefore, limit the model’s ability to leverage data parallelism effectively. 7 CONCLUSION Through extensive empirical experiments measuring the evolution of the loss sharpness during training, we have demonstrated how different methods such as initialization, learning rate warmup, and normalization all enable higher learning rates to be used (without causing divergence) by reducing λ1 during training. It is noteworthy that two of the most popular models we investigated (the popular post-activation variant of the Resnet-50 and the standard WideResnet 28-10) did not benefit from learning rate warmup, and exhibited small values of λ1 throughout training at the learning rates we considered. Thus researchers and practitioners who primarily work with well-optimized architectures might never notice a benefit from using warmup. However, even seemingly trivial modifications to a working architecture can easily result in large values of λ1 and thus instability early in training—a naive response to such a situation would be to dramatically reduce the learning rate or, even worse, abandon the modification being investigated all together. 
We hope the perspective presented in this work can help future researchers better navigate such situations, either through investigating different initializations, applying warmup and gradient clipping, or changing the location of normalization layers in the model. A LIMITATIONS Our analysis has focused primarily on models trained with SGD with momentum. This decision was motivated to reduce additional confounds that arise when using adaptive preconditioning. Notably, it is unclear what the analogue of λ1 ≤ 2/η should be for a model trained with Adam. In the appendix, we provide evidence that loss curvature adaption to the learning rate does occur even for Transformer models trained with Adam, and that learning rate warmup results in the similar effect of the optimization trajectory being “pushed” to flatter regions. However we leave a deeper analysis into this for future work. Finally, while our experiments certainly raise questions about the efficacy better model initialization has on further accelerating training, our measurements has focused primarily on the (lack of) influence initialization has on λ1 mid training. It is possible that better initializations could have lasting influence on the broader Hessian eigenspectrum (for example improving the ration λk/λ1 for smaller eigenvalues λk) and that our analysis is missing such an effect. B BRIEF REVIEW OF THE LOSS HESSIAN, EIGENVALUES AND QUADRATIC STABILITY BOUNDS For completeness, we include a formal definition of the fundamental mathematical quantities discussed in the paper. We derive most of this discussion from the relevant chapters in Horn and Johnson (2012) and Boyd et al. (2004). We refer the reader to these sources for a more detailed discussion. B.1 THE HESSIAN MATRIX The second derivative or Hessian matrix of the loss function L(·) at a point θ ∈ Rn is denoted by H(θ) ∈ Rn×n where ∀1 ≤ i, j ≤ n H(θ)i,j = ∂2L(θ) ∂θi∂θj . (1) Moreover, by Schwarz’s theorem, if the second partial derivatives of L(·) are continuous at θ, the matrix H(θ) is symmetric. This is a broad condition that holds for all the loss surfaces we examine in the main text (beyond a set of measure zero). B.2 EIGENVALUES Definition 1 Let A ∈ Rn×n. If a scalar λ and a nonzero vector x satisfy Ax = λx, λ ∈ C, x ∈ Cn, x 6= 0 (2) then λ is called an eigenvalue of A and x is called an eigenvector of A associated with λ. If A is a symmetric real matrix (such as the Hessian matrix), A can be factored as A = QΛQT , (3) where Q ∈ Rn×n is an orthogonal matrix and Λ = diag(λ1, . . . , λn) ∈ Rn×n is a real diagonal matrix. Here, {λi}ni=1 are all of the eigenvalues of A. We order λi such that λ1 ≥ λ2 · · · ≥ λn. In this ordering, λ1 corresponds to the maximum eigenvalue of A and λn corresponds to its minimum eigenvalue. The maximum and minimum eigenvalues of A satisfy the following important properties: λ1 = sup x6=0 xTAx xTx , λn = inf x 6=0 xTAx xTx . (4) In particular, for any x ∈ Rn, we have λn‖x‖22 ≤ xTAx ≤ λ1‖x‖22. B.3 STABILITY OF GRADIENT DESCENT FOR QUADRATIC LOSS Now that we have established the basics, let’s derive the stability condition for GD applied to a quadratic loss function. Note that Wu et al. (2018) and Cohen et al. (2021) provide more general bounds for the stability of SGD-type optimization algorithms. Here, we state & derive the stability condition for GD for the sake of completeness. Let L(θ) = 1 2 θTHθ, where H is a symmetric matrix with non-negative eigenvalues. Let’s consider GD dynamics starting from a random point θ0. 
Under GD with a fixed step-size η > 0, we have θt+1 = θt − η∇L(θt) = θt − ηHθt = (I − ηH)θt. Continuing this iteration to step 0 yields θt = (I − ηH)tθ0. (5) As t→∞, this iteration is stable iff5 the eigenvalues of (I − ηH) have absolute magnitude bounded by one, which can be stated equivalently as 1− ηλ1 ≥ −1⇐⇒ 2 ≥ λ1η ⇐⇒ 2 η ≥ λ1, which is the exact condition discussed and explored in the main text. 5As θ0 was randomly chosen, we assume it has a non-zero overlap with all eigenvectors. C MISCELLANEOUS FIGURES D PERFORMANCE OF MODELS IN FIGURE 2 In this this section, we plot the performance vs learning rate for all of the models shown in Figure 2 of the main text. These are shown in Figures 9, 13, 10, and 11. For models which diverged, we plot the best test performance achieved before divergence. In all settings, high curvature affects the final performance by limiting the use of higher learning rates. We also noted several models in Figure 2 which diverged despite training starting out in a stable region of parameter space. In Figure 12 we plot the evolution of the loss sharpness during training, showing that it quickly enters a region where λ1 > 2.0/η before diverging around step 90. D.1 DISCUSSION OF STRIDE-(1,1) DENSENET EXPERIMENTS In this section we discuss in more detail the Stride-(1,1) DenseNet experiments shown in Figure 2 in the main text. These experiments use a non-standard version of the DenseNet architecture where all average pooing strides are set to (1,1). Note the experiments in Figrue 5 and Table 1 instead use the standard strides implementation from the open sourced code of Zhu et al. (2021). The Stride-(1,1) DenseNet architecture is noteworthy because it is a counter example to common intuition that adding Batch Normalization results in flatter curvature. As shown in Figure 2 (left), the BN variants all have high curvature at initialization, however the right hand side plot shows that the mid training curvature becomes comparable to the non-BN variants. In Figure 13 we provide a more detailed analysis to understand what is happening with BN. First we plot the performance of the BN vs non-BN models both with and without warmup. The differences are striking. Without warmup, we see the BN performance is highly stochastic, some trials outperform the non-BN variants, while some trials underperform the non-BN variants. However, when trained with 1000 steps of warmup the BN variants now significantly outperform the non-BN models are all considered learning rates. They can even be successfully trained at higher learning rates than the non-BN variants, despite the high initial curvature. To provide further detail, we show the training curves of select individual runs, both the evolution of the training loss and the evolution of curvature. The BN variants all exhibit catapult behavior early—the loss increases initially until the parameters enter a region of flatter curvature. Warmup helps the BN variants, and significantly reduces the severity of the catapult phase while enabling faster long term training. Additionally, when we add warmup we find that the BN variants can now be trained at higher learning rates than the non-BN variant. As shown at learning rate of .22, the non-BN model diverges during the warmup phase despite lower initial curvature. Based on these experiments we arrive at the following conclusions. First, adding BN to the Stride(1,1) DenseNet architecture results in high curvature at initialization, which results in a short period of instability during training. 
However, once the parameters escape this region of large curvature, the BN variant exhibits favorable training dynamics relative to the non-BN variants. Thus, there still seems to be benefit to adding BN, assuming steps are taken to mitigate the initial period of high curvature. The fact that adding BN to a model can result in high initial curvature is not without precedent, as Yang et al. (2019) observe that adding BN to deep fully connected networks can result in exploding gradients at initialization. These experiments highlight one of the primary takeaways of this work: maintaining flat curvature throughout training is a necessary (though not sufficient) condition for stable training of neural networks. Thus it is not the presence of BN itself that is necessary for stable training; rather, BN is generally a useful tool for reducing curvature (and thus stabilizing training). BN has clear benefits for curvature in most cases, but it is possible to produce configurations where adding BN paradoxically results in higher initial curvature than the non-BN variant. In these cases training is initially unstable, but once the curvature is reduced we see benefits of using BN later in training.

E DETAILS ON COMPUTING THE HESSIAN EIGENSPECTRUM VIA LANCZOS

We use Lanczos iterations to estimate the top eigenvalue of the Hessian. The Lanczos algorithm only requires Hessian-vector products, which can be computed efficiently via Pearlmutter's trick (Pearlmutter, 1994). Previous research has demonstrated that this approach provides a robust and scalable framework for examining the eigenvalues of the Hessian of large neural networks (Ghorbani et al., 2019; Papyan, 2018). For our WMT / LM1B experiments, we run the algorithm for 45 steps, while for image models we use 40 steps. When monitoring the evolution of the top eigenvalue as a function of the number of Lanczos steps, we observe that the algorithm converges in all cases except one. For the Resnet with ReLU→BN ordering, convergence is significantly slower due to a very small eigengap between the top eigenvalue and the bulk; we use 200 Lanczos steps in this case to alleviate the issue. For this model, estimating λ1 via power iteration (as is commonly done in the deep learning literature) would incorrectly return the largest negative eigenvalue, not λ1 as desired. It is well known that the Lanczos algorithm can suffer from numerical instabilities caused by finite-precision arithmetic. To alleviate these issues, Lanczos vectors are stored in float64 precision and we reorthogonalize at each step of the algorithm.

F TRAINING DETAILS FOR TABLE 1

F.1 NEURAL MACHINE TRANSLATION

Neural Machine Translation experiments are based on the Transformer model (Vaswani et al., 2017). We use separate embeddings on the encoder and decoder, and a common word-piece vocabulary of size 32000. For depth, we use 6 layers on both the encoder and decoder. For width, we experiment with two models, namely Transformer-Base and Transformer-Wide. For Transformer-Base, we use word embeddings with 512 dimensions, 8 heads and a feed-forward dimension of 2048. For Transformer-Wide we use word embeddings with 1024 dimensions, 16 heads and a feed-forward dimension of 4096. The experiments reported in Figures 3 and 5 use Transformer-Base. The experiments reported in Table 1 use Transformer-Wide models trained with Adam (Kingma and Ba, 2014). We sweep over warm-up, learning rate, gradient clipping and init_scaling, optimize for validation loss, and report test-set BLEU in Table 1.
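The power-iteration caveat mentioned in Appendix E above can be reproduced with a few lines of NumPy: on a spectrum whose most negative eigenvalue dominates in magnitude, plain power iteration converges to that negative eigenvalue rather than λ1, whereas a simple spectral shift (or Lanczos, as used in the paper) recovers λ1. The matrix below is synthetic and only meant to illustrate the failure mode; all names and numbers are placeholders.

```python
import numpy as np

# Symmetric matrix whose most-negative eigenvalue dominates in magnitude,
# loosely mimicking the ReLU->BN Resnet case: lambda_1 = 1.0 but lambda_min = -3.0.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(100, 100)))
eigs = np.concatenate([[1.0, -3.0], rng.uniform(-0.5, 0.5, 98)])
H = (Q * eigs) @ Q.T

def power_iteration(mat, steps=500):
    """Return the Rayleigh quotient after plain power iteration."""
    v = rng.normal(size=mat.shape[0])
    for _ in range(steps):
        v = mat @ v
        v /= np.linalg.norm(v)
    return v @ mat @ v

print(power_iteration(H))            # converges to -3.0 (largest magnitude), not lambda_1
# A common fix: shift the spectrum so the top eigenvalue dominates in magnitude.
shift = 3.5                          # any upper bound on |lambda_min| works
print(power_iteration(H + shift * np.eye(100)) - shift)   # recovers lambda_1 = 1.0
```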
All the models are trained for 60 epochs at batch size of 1024 for Transformer-Base models, and batch size of 512 for Transformer-Big models. We use dropout of 0.1, label smoothing of 0.1 and no weight decay for all these models. F.2 DENSENETS In Table 1 the ResNet-50 (w/o BN) architecture was trained for 100 epochs at batch size 512, with l2 regularization of 5e-5, dropout of .3. It was trained with SGD with nesterov momentum of .9 and learning rate of .2. We applied gradient clipping at global l2 norm of 5 and used linear learning rate warmup with warmup period of 1000 steps. For Table 1, the DenseNet-100 model was trained using the Gradinit codebase 6 by modifying the supplied DenseNet script to apply gradient clipping of norm 6 and to use the default initialization instead of GradInit. G TRAINING DETAILS FOR FIGURE 2 The WideResnet-28-10 models were trained with batch size of 1024 for 300 epochs. We applied the MixUp augmentation(Zhang et al., 2017). For learning rate warmup we used 1000 steps of linear warmup until the peak learning rate is achieved, at which point the learning rate is decayed according to the cosine schedule. The Stride-(1,1) DenseNet models were trained with batch size of 512 using the SGD optimizer with momentum of 0.9, weight decay of 5e-4, L2 regularization of 1e-4 and warmup of 1000 steps (for the models where warmup is used) followed by cosine decay. The models were trained for 200 epochs. For the DenseNet architecture we used growth_rate of 32 and reduction of 0.5. The Resnet-50 models were trained with batch size of 2048 using the SGD optimizer with nesterov momentum of .9. The learning rate schedule was the same as in the WideResnet case, with linear warmup of 1000 steps followed by cosine decay. We applied label smoothing of .1 and used the standard flip plus crop for data augmentation. The Transformer models on LM1B were trained at batch size 1024 using SGD with nesterov momentum of .9. We use embedding dimension of 512, 6 layers with 8 heads and MLP hidden dimension of 1024. The attention dropout rate was .1. The learning rate schedule followed the same recipe as in the Resnet cases. H CURVATURE ADAPTATION WITH THE ADAM OPTIMIZER The discussion in the main text focused primarily on models trained with SGD and momentum. In this appendix, we briefly examine if similar conclusions hold for optimizers such as Adam that use preconditioning. It is unclear a priori whether or not curvature adapation to the learning rate should occur for optimizers which apply preconditioning. However, given that Adam is a diagonal preconditioner applied to a non-diagonal Hessian, there may be some similar effects observed. 6https://github.com/zhuchen03/gradinit Consider a simple quadratic loss L(θ) = 1 2 θTHθ, H 0. where optimization is performed via preconditioned gradient descent with a fixed diagonal preconditioning matrix D: θt = θt−1 − ηD−1∇L(θt−1) = θt−1 − ηD−1 ( Hθt−1 ) = ( I − ηD−1H ) θt−1 = ( I − ηD−1H )t θ0 As such, this simple model would suggest that the max eigenvalue of the following matrix may be related to training instability of models trained with Adam λmax(D −1H) = λmax(D −1/2HD−1/2). (6) While (6) does not take into the account the effects of adaptive preconditioning or momentum, we find some empirical evidence that this approximation provides understanding into the stability of the optimization. Figure 14 below examines the evolution of λmax(D−1/2HD−1/2) for three Transformer models trained with Adam and different warm-up lengths. 
Here, D is the diagonal matrix with D_{i,i} = √(v̂_i) + ε, where v̂ is Adam's bias-corrected exponential moving average of the squared gradients and ε is Adam's stability constant. We observe that, similar to the models trained with momentum, the maximum (preconditioned) Hessian eigenvalue adapts to the warm-up schedule (green and red markers). We notice that, perhaps due to the effect of momentum or adaptive preconditioning, the threshold 2/η does not align well with the data. Instead, an empirically corrected threshold of 40/η fits the data better. We observe that instabilities in model training coincide exactly with λmax(D−1/2HD−1/2) crossing this empirically corrected threshold. These observations suggest that some of the insights discussed in the main text carry over to the case of adaptive optimizers. We leave further exploration of this more complex setting to future work.

I COMPUTE RESOURCES USED

Nearly all experiments utilized the Google Cloud Platform with v2 cloud TPUs, except for the following: the Figure 2 Resnet-50 and Stride-(1,1) DenseNet experiments utilized v3 cloud TPUs, while the GradInit code was run on a cloud machine with a single V100 GPU. The Figure 2 experiments were run in parallel using up to 50 v2 TPUs concurrently over the period of a few days. Additionally, all the Machine Translation models were trained on v3 cloud TPUs.
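To make the quantities of Appendix H above concrete, the sketch below forms a diagonal preconditioner of the form D_{i,i} = √(v̂_i) + ε from a synthetic gradient history and evaluates the preconditioned sharpness λmax(D−1/2HD−1/2) for a toy Hessian. This is our own illustration with placeholder names and synthetic data, not the code used for the experiments.

```python
import numpy as np

def adam_preconditioner(grad_history, beta2=0.999, eps=1e-8):
    """Diagonal of D with D_ii = sqrt(v_hat_i) + eps,
    where v_hat is the bias-corrected EMA of squared gradients."""
    v = np.zeros_like(grad_history[0])
    for g in grad_history:
        v = beta2 * v + (1.0 - beta2) * g**2
    v_hat = v / (1.0 - beta2**len(grad_history))   # bias correction
    return np.sqrt(v_hat) + eps

def preconditioned_sharpness(H, d_diag):
    """lambda_max(D^{-1/2} H D^{-1/2}) for a diagonal D given by d_diag."""
    scale = 1.0 / np.sqrt(d_diag)
    return np.linalg.eigvalsh(scale[:, None] * H * scale[None, :]).max()

# Toy usage with a synthetic Hessian and a synthetic gradient history.
rng = np.random.default_rng(0)
dim = 64
A = rng.normal(size=(dim, dim))
H = A.T @ A / dim
grads = [rng.normal(size=dim) * 0.1 for _ in range(100)]
d = adam_preconditioner(grads)
lam_pre = preconditioned_sharpness(H, d)
eta = 1e-3
print(lam_pre, 2.0 / eta, 40.0 / eta)   # compare against the thresholds discussed above
```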
1. What is the focus of the paper regarding experimental results? 2. What are the strengths of the review, particularly in providing coherent insights? 3. Do you have any concerns or questions about the presented results, especially regarding their implications? 4. How do you assess the clarity and quality of the review's content?
Summary Of The Paper Review
Summary Of The Paper
This paper presents a comprehensive set of experiments showing how the maximal learning rate for training modern architectures depends on various algorithmic choices.

Review
Comments:
Results are well presented and yield a rather coherent picture. In particular, I like the intuition that warmup brings the parameters to a region of smaller curvature, which fits well with the catapult mechanism. I also appreciate the batch size scaling experiments, which go against the idea that the only quantity that matters is the ratio of learning rate to batch size. The observation that lambda_1 follows 2/lr in the presence of warmup could be better connected to Cohen et al. with a sentence such as "Cohen et al. showed that lambda_1 increases up to 2/lr and then plateaus at the edge of stability in the constant-lr setup; we show that this holds even in the presence of scheduling." The figures take extremely long to render in the various PDF readers I tried. This is likely due to an excessive number of datapoints in the log-scale figures (this has happened to me in the past); consider subsampling the data at large values.

Questions:
Fig. 3: Could one try to understand why some of the models diverge even though eta < 2/lambda_1?
Fig. 3: Multiplying the init variance by 1.5 seems to increase the sharpness by a factor between 5 and 100. Why such variability? Can one predict this factor from the number of layers in the model?
If increasing the init variance increases sharpness, couldn't one simply reduce the sharpness by reducing the init variance, hence enabling larger learning rates? This could be connected to the question of lazy vs. feature learning regimes.
ICLR
Title Occlusion resistant learning of intuitive physics from videos Abstract To reach human performance on complex tasks, a key ability for artificial systems is to understand physical interactions between objects, and predict future outcomes of a situation. This ability, often referred to as intuitive physics, has recently received attention and several methods were proposed to learn these physical rules from video sequences. Yet, most these methods are restricted to the case where no occlusions occur, narrowing the potential areas of application. The main contribution of this paper is a method combining a predictor of object dynamics and a neural renderer efficiently predicting future trajectories and explicitly modelling partial and full occlusions among objects. We present a training procedure enabling learning intuitive physics directly from the input videos containing segmentation masks of objects and their depth. Our results show that our model learns object dynamics despite significant inter-object occlusions, and realistically predicts segmentation masks up to 30 frames in the future. We study model performance for increasing levels of occlusions, and compare results to previous work on the tasks of future prediction and object following. We also show results on predicting motion of objects in real videos and demonstrate significant improvements over state-of-the-art on the object permanence task in the intuitive physics benchmark of Riochet et al. (2018). 1 Introduction Learning intuitive physics has recently raised significant interest in the machine learning literature. To reach human performance on complex visual tasks, artificial systems need to understand the world in terms of macroscopic objects, movements, interactions, etc. Infant development experiments show that young infants quickly acquire an intuitive grasp of how objects interact in the world, and that they use these intuitions for prediction and action planning (Carey, 2009; Baillargeon & Carey, 2012). This includes the notions of gravity (Carey, 2009), continuity of trajectories (Spelke et al., 1995), collisions (Saxe & Carey, 2006), etc. Object permanence, the fact that an object continues to exist when it is occluded, (Kellman & Spelke, 1983), is one of the first concepts developed by infants. From a modeling point of view, the key scientific question is how to develop general-purpose methods that can make physical predictions in noisy environments, where many variables of the system are unknown. A model that could mimic even some of infant’s ability to predict the dynamics of objects and their interactions would be a significant advancement in model-based action planning for robotics (Agrawal et al., 2016; Finn & Levine, 2017). Importantly, to be applied to real-world problems, such a model needs to predict object motion in 3D and handle frequent inter-object occlusions. Yet, to our knowledge, most current works on learning intuitive physics get around this challenge by either i) working in 2D spaces with no occlusions (Battaglia et al., 2016; Chang et al., 2016; Fragkiadaki et al., 2015) or ii) learning end-to-end models without decomposing the scene into objects (Agrawal et al., 2016; Lerer et al., 2016; Finn et al., 2016). The former methods have demonstrated that learning models of intuitive physics is possible but assume ground truth positions of objects are available at both training and test time. 
The latter methods can operate directly on pixel inputs without knowing ground truth positions of objects but are typically limited to a small number of objects and generalize poorly to new setups (e.g. a new number of objects in the scene, see (Lerer et al., 2016)). A third class of methods has recently emerged (Janner et al., 2018) that first decomposes the input image of the 3D scene into layers corresponding to masks of individual objects and learns scene dynamics given such object-centric decomposition. Note that here the object dynamics is learnt from pixel masks of individual objects, rather than their ground truth positions. This is difficult for 3D scenes due to frequent inter-object occlusions that present two major challenges. First, estimating accurate position and velocity of objects is challenging due to partial occlusions by other objects. Second, objects can be fully occluded by other objects for a significant number of frames. This work falls into the third class of compositional methods, but develops an occlusion resistant model for learning intuitive physics that addresses both of these challenges due to inter-object occlusions. In detail we propose a compositional model that from object instance masks and depth fields in two consecutive frames, (Mt,t+1, Dt,t+1), estimates the center, velocity and size of objects. This predicted state ŝt is then used as input of a Recurrent Interaction Network, which predicts a sequence of futures states ŝt+2, ..., ŝt+L. This sequence of states is given to the Compositional Rendering Network which produces segmentation masks M̂t+2, ..., M̂t+L and depth estimates D̂t+2, ..., D̂t+L in future frames. The key innovation of the proposed model is dealing with partial and complete occlusions in the scene. To deal with partial occlusions, the obtained sequence of masks+depths is compared to the ground truth, and gradients are backpropagated through the pre-trained Compositional Rendering Network to refine state predictions. This allows us to refine positions of partially occluded objects where simply taking the centroid of the observed portion of the mask results in an incorrect estimate of the object position. With this refinement object positions are corrected taking into account the unobserved (occluded) portion of the object. The refined state estimates s̄t+1, ..., s̄t+L are used at training time for learning parameters of the Recurrent Interaction Network and at test time to improve accuracy of object position prediction when following partially occluded objects. To deal with full occlusions, when the object is not visible in multiple frames, we use the learnt model of object dynamics (Recurrent Interaction Network) to predict the position of the object multiple frames ahead and thus recovering the object position after the occlusion. Using the proposed approach, we show that it is possible to learn object dynamics in 3D environments with severe inter-object occlusions and predict segmentation masks up to 30 frames in the future despite occlusion other objects thus mimicking object permanence. 2 Related work Forward modelling in videos. Forward modelling in video has been studied for action planning (Ebert et al., 2018; Finn et al., 2016) and as a scheme for unsupervised learning of visual features (Lan et al., 2014; Mathieu et al., 2015). In that setup, a model is given a sequence of frames and has to generate frames in future time steps. 
To succeed in this task, such models need to predict object movements, suggesting that they need to learn physical regularities from video. However, models for end-to-end future frame prediction tend to perform poorly on long-term prediction tasks (say more 5-8 frames (Lan et al., 2014; Mathieu et al., 2015; Finn et al., 2016)), failing to preserve object properties and generating blurry outputs. This suggests that models for intuitive physics may require a more structured representation of objects and their interactions. Learning dynamics of objects. Longer term predictions can be more successful when done on the level of trajectories of individual objects. For example, in (Wu et al., 2017b), the authors propose "scene de-rendering", a system that builds an object-based, structured representation from a static (synthetic) image. The recovered state can be further used for physical reasoning and future prediction using a physics engine on both synthetic and real data (Battaglia et al., 2013; Wu et al., 2017a). Future prediction from static image is often multi-modal (e.g. car can move forward or backward) and hence models able to predict multiple possible future predictions, e.g. based on variational auto-encoders (Xue et al., 2016), are needed. Others have developed structured models that factor object motion and object rendering into two learnable modules. Examples include (Watters et al., 2017; Marco Fraccaro, 2017; Ehrhardt et al., 2017b;a) that combine object-centric dynamic models and visual encoders. Such models parse each frame into a set of object state representations, which are used as input of a "dynamic" model, predicting object motion. However, (Marco Fraccaro, 2017) restrict drastically the complexity of the visual input by working on binary 32x32 frames, and (Ehrhardt et al., 2017b;a; Watters et al., 2017) still need ground truth position of objects to train their models. None of these work explicitly models inter-object occlusions, which is the focus of our method. In our work, we build on learnable models of object dynamics (Battaglia et al., 2016) and (Chang et al., 2016), which have the key property that they are compositional and hence can model a variable number of objects, but extend them to learn from visual input rather than ground truth object state vectors. Our work is related to (Janner et al., 2018), done independently and concurrently with our work, who develop an object-oriented model of dynamics coupled with a differentiable object renderer to predict a single image with segmentation masks of objects in a future time, given a single still image as input. In contrast, our model predicts frame-by-frame object motion in scenes with partial and full object occlusion. This is possible because (i) our model of dynamics is recursive, predicting a whole sequence of object movements (instead of one single image in future (Janner et al., 2018)) that allows the model to be applied recursively to follow an object through complete occlusion by other objects; (ii) we design a refinement procedure that allows to refine the estimated positions of objects in case of partial occlusions. In addition, in contrast to (Janner et al., 2018) our model predicts velocity of objects and depth of the scene (also taking as input a pair of frames and the depth field). Others have proposed unsupervised methods to discover objects and their interactions in 2d videos (van Steenkiste et al., 2018). 
It is also possible to construct Hierarchical Relation Networks (Mrowca et al., 2018), representing objects as graphs and predicting interactions between pairs of objects. However, this task is still challenging and requires full supervision in the form of ground truth position and velocity of objects.

Learning physical properties from visual inputs. Also related are methods for learning physical properties of objects. Learning of physical properties, such as mass, volume or coefficients of friction and restitution, has been considered in (Wu et al., 2016). Others have looked at predicting the stability and/or the dynamics of towers of blocks (Lerer et al., 2016; Zhang et al., 2016; Li et al., 2016a;b; Mirza et al., 2017; Groth et al., 2018). Our work is complementary. We do not consider prediction of physical properties but focus on learning models of object dynamics handling inter-object occlusions at both training and test time. (Greff et al., 2019)

Contributions. We describe a model that learns complex dynamics of objects in a 3D environment, where inter-object occlusions occur frequently. Our model combines an abstract representation of the scene (position, velocity and depth of objects) with a compositional neural renderer predicting the resulting object masks with depth and explicitly modelling occlusions between objects. This procedure allows us to train the model even when some objects are partially or totally occluded. Unlike (Watters et al., 2017), our model is fully compositional and handles a variable number of objects in the scene. Moreover, it does not require annotated inter-frame correspondences as input during training.

3 Occlusion resistant modeling for intuitive physics

This section describes our model for occlusion resistant learning of intuitive physics. We first describe the learning set-up considered in this work. We then describe in detail the two main components of our model. In section 3.2 we outline the compositional renderer with occlusion reasoning that predicts object masks given a scene state representation, and in section 3.3 we detail the recurrent interaction network that predicts the scene state evolution over time. Finally, in section 3.4 we outline the training procedure.

3.1 Set-up overview

As illustrated in Figure 1 (and the Algorithm in the Supplementary Material), during learning our method observes a sequence of object instance masks and depth fields Mt,..,t+L, Dt,..,t+L. The mask for each frame is composed of a set of channels, where each channel represents the pixels corresponding to an individual object, along with its color and shape (boxes or balls of different sizes). The model does not require knowledge of the correspondence between objects over time, which might be difficult to obtain in practice. Our model is composed of two networks described below: a pre-trained occlusion-sensitive Compositional Rendering Network (Renderer), which renders masks and depth fields given a set of object positions (also called states), and a trainable Recurrent Interaction Network (RecIntNet), which predicts positions of objects in future frames.

3.2 Occlusion modeling: the Compositional Rendering Network

[Figure: the Compositional Rendering Network. For each pixel (x, y), an Object Renderer MLP (3 hidden layers) maps the input object vector (px, py, d, c) and the pixel coordinates, followed by bilinear ×2 up-sampling and 3 × (3×3 convolution) blocks, to an object mask and object depth; an Occlusion predictor composes the per-object outputs into the scene mask and scene depth, trained with the losses Lmask and Ldepth.]

The Object Renderer takes as input the coordinates (xk, yk, dk) of object k in a frame together with additional dimensions for intrinsic object properties (shape, color and size) (c).
The network predicts the object's binary mask Mk as well as the depth map Dk. The input vector (xk, yk, dk, ck) ∈ R^l is first copied into a (l+2)×16×16 tensor, where each 16×16 cell position contains an identical copy of the input vector together with the x and y coordinates of the cell. Adding the x and y coordinates may seem redundant, but this kind of position field enables a very local computation of the shape of the object and avoids a large number of network parameters (similar position-field architectures have also been studied in recent work). The input tensor is processed with 1×1 convolution filters. The resulting 16-channel feature map is further processed by three blocks of convolutions. Each block contains three convolutions with filters of size 1×1, 3×3 and 1×1, respectively, and 4, 4 and 16 feature maps, respectively. We use ReLU pre-activation before each convolution, and up-sample (scale of 2 and bilinear interpolation) feature maps between blocks. The last convolution outputs N + 1 feature maps of size 128×128, the first feature map encoding depth and the N last feature maps encoding mask predictions for the individual objects. The object rendering network is applied to all objects present, resulting in a set of masks and depth maps denoted as {(M̂k, D̂k), k = 1..N}. The Occlusion predictor takes as input the masks and depth maps for the N objects and aggregates them to construct the final occlusion-consistent mask and depth map. To do so it computes, for each pixel i, j ≤ 128 and object k, the following weight:

$c^k_{i,j} = \frac{e^{\lambda \hat{D}^k_{i,j}}}{\sum_{q=1}^{N} e^{\lambda \hat{D}^q_{i,j}}}, \quad k = 1..N, \qquad (1)$

where λ is a parameter learned by the model. The final masks and depth maps are computed as a weighted combination of the masks M̂^k_{i,j} and depth maps D̂^k_{i,j} of the individual objects k:

$\hat{M}_{i,j} = \sum_{k=1}^{N} c^k_{i,j} \hat{M}^k_{i,j}, \qquad \hat{D}_{i,j} = \sum_{k=1}^{N} c^k_{i,j} \hat{D}^k_{i,j},$

where i, j ≤ 128 are output pixel coordinates and c^k_{i,j} are the weights given by (1). The intuition is that the occlusion renderer constructs the final output (M̂, D̂) by selecting, for every pixel, the mask with minimal depth (corresponding to the object occluding all other objects). For negative values of λ, equation (1) acts as a softmin that selects, for every pixel, the object with minimal predicted depth. Because λ is a trainable parameter, gradient descent forces it to take large negative values, ensuring good occlusion predictions. Also note that this model does not need to be supervised by the depth field to predict occlusions correctly. In this case, the object rendering network still predicts a feature map D̂ that is no longer equal to the depth but is rather an abstract quantity that preserves the relative order of objects in the view. This allows the Renderer to predict occlusions when the target masks are RGB only. However, it still needs depth information in the input (either true depth or relative ordering). A minimal code sketch of this aggregation step is given below.
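To make the aggregation step concrete, here is a minimal sketch in PyTorch of how the occlusion predictor of equation (1) could be implemented; the function name, tensor shapes and the way λ is stored are assumptions for illustration, not the authors' actual implementation.

```python
import torch

def aggregate_occlusions(masks, depths, lam):
    """Occlusion-aware aggregation of per-object predictions (cf. equation (1)).

    masks:  tensor of shape (N, H, W), soft mask of each object
    depths: tensor of shape (N, H, W), predicted depth map of each object
    lam:    scalar learnable parameter; expected to become negative during
            training so that the softmax over depth acts as a softmin
    """
    # c^k_{i,j} = exp(lam * D^k_{i,j}) / sum_q exp(lam * D^q_{i,j})
    weights = torch.softmax(lam * depths, dim=0)      # (N, H, W)
    scene_mask = (weights * masks).sum(dim=0)         # (H, W)
    scene_depth = (weights * depths).sum(dim=0)       # (H, W)
    return scene_mask, scene_depth
```

In practice λ would be registered as a learnable parameter of the renderer (e.g. `torch.nn.Parameter`, initialized near zero or at a small negative value), so that gradient descent is free to push it towards large negative values as described above.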
3.3 Dynamics prediction: the Recurrent Interaction Network (RecIntNet)

To model object dynamics, we build on the Interaction Network (Battaglia et al., 2016), which predicts dynamics of a variable number of objects by modelling their pairwise interactions. Here we describe three extensions of the vanilla Interaction Network model. First, we extend the Interaction Network to model 2.5D scenes where position and velocity have a depth component. Second, we extend the Interaction Network to train from the whole sequence of future states and call this new model the Recurrent Interaction Network. Third, we introduce variance in the position predictions, to stabilise the learning phase and avoid penalizing very uncertain predictions too heavily. The three extensions are described below.

Modelling compositional object dynamics in 2.5D scenes. As shown in (Battaglia et al., 2016), Interaction Networks can be used to predict object motion both in 3D and in 2D space. Given a list of objects represented by their positions, velocities and size in the Cartesian plane, an Interaction Network models interactions between all pairs of objects, aggregates them over the image and predicts the resulting motion for each object. Here, we model object interactions in 2.5D space, since we have no access to the object position and velocity in Cartesian space. Instead we have locations and velocities in the image plane plus depth (the distance between the objects and the camera).

Training from a sequence of future frames. The vanilla Interaction Network (Battaglia et al., 2016) is trained to predict the position and velocity of each object one step into the future. Here, we learn from multiple future frames. In detail, we "roll out" the Interaction Network to predict a whole sequence of future states, as if a standard Interaction Network was applied in a recurrent manner. We found that faster training can be achieved by directly predicting changes in the velocity, hence:

$[p_1, v_1, c] = \left[\, p_0 + \delta t\, v_0 + \tfrac{\delta t^2}{2}\, dv,\;\; v_0 + dv,\;\; c \,\right], \qquad (2)$

where p_1 and v_1 are the position and velocity of the object at time t_1, p_0 and v_0 are the position and velocity at time t_0, and δt = t_1 − t_0 is the time step. Position and velocity are expressed in pixel space, p = [p_x, p_y, d], where p_x, p_y are the position of the object in the frame, d is its depth, and v is the velocity in that space. Hence dv can be seen as the acceleration, and (v_0 + dv) and (p_0 + δt v_0 + (δt²/2) dv) as the first and second order Taylor approximations of velocity and position, respectively. Assuming an initial weight distribution close to zero, this gives the model a prior that the object motion is linear.

Prediction uncertainty. To account for prediction uncertainty and stabilize learning, we assume that the object position follows a multivariate normal distribution with diagonal covariance matrix. Each term σ²_x, σ²_y, σ²_d of the covariance matrix represents the uncertainty in prediction along the x-axis, y-axis and depth. Such uncertainty is also given as input to the model, to account for uncertainty either in object detection (first prediction step) or in the recurrent object state prediction. The resulting loss is the negative log-likelihood of the target p_1 w.r.t. the multivariate normal distribution, which reduces to

$\mathcal{L}\big((\hat{p}_1, \hat{\tau}_1), p_1\big) = \frac{(\hat{p}_1 - p_1)^2}{\exp \hat{\tau}_1} + \hat{\tau}_1, \qquad (3)$

where τ̂_1 = log σ̂²_1 is the estimated level of noise propagated through the Recurrent Interaction Network, σ²_1 concatenates σ²_x, σ²_y, σ²_d, p_1 is the ground truth state and p̂_1 is the predicted state at time t + 1. The intuition is that the squared error term in the numerator is weighted by the estimated level of noise τ̂_1, which also acts as an additional regularizer. A minimal code sketch of this state update and loss is given below.
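As an illustration, the constant-velocity state update of equation (2) and the uncertainty-weighted loss of equation (3) could look as follows in PyTorch; the function names and tensor shapes are assumptions, and the dynamics network that actually produces dv and τ̂ is not shown.

```python
import torch

def rollout_step(p0, v0, dv, dt=1.0):
    """One state update with a constant-velocity prior (cf. equation (2)).
    p0, v0, dv: tensors of shape (num_objects, 3) holding [px, py, depth] components."""
    v1 = v0 + dv                             # first-order Taylor update of velocity
    p1 = p0 + dt * v0 + 0.5 * dt**2 * dv     # second-order Taylor update of position
    return p1, v1

def uncertainty_loss(p_hat, tau_hat, p_true):
    """Heteroscedastic regression loss (cf. equation (3)); tau_hat = log sigma^2."""
    return ((p_hat - p_true) ** 2 / torch.exp(tau_hat) + tau_hat).mean()
```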
3.4 System Training

In this section we give a high-level description of the procedure for training our model (details are in the supplementary Section S3). The Compositional Rendering Network is pre-trained offline from masks and depths in individual frames. Training of the Recurrent Interaction Network is done in the following three steps. First (initialization phase), we select from each training video short clips containing L frames (here, we use L=10). In each frame, we estimate the position, depth and size of each object by computing the centroid of each mask, its average depth and its diameter (maximum distance between two mask pixels). To correct errors due to partial occlusions, we perform occlusion-aware refinement of object positions using gradient descent through the pre-trained Renderer (see supplementary material). The result is a partial state vector (no velocities) for each frame, corrected for partial occlusions. Spatially close objects in two consecutive frames are linked and considered the same objects. In a second step (prediction phase), we use the Recurrent Interaction Network to roll out L − 2 predictions for the position of these objects in future frames starting from Frame 2. Frame 1 is used to compute the initial velocities of Frame 2. In a third step (update phase), we use the distance between the ground truth positions established in step 1 and the rollout positions to perform the training of the Recurrent Interaction Network.

4 Experiments

In this section we demonstrate the ability of our model to learn intuitive physics in the presence of inter-object occlusions. We evaluate our model on two tasks: (i) future prediction, predicting objects' trajectories up to a horizon of 10 frames, and (ii) object following, coupling the dynamics network with the neural renderer to follow objects under occlusions, up to a horizon of 30 frames. In future prediction, we initialize the network with two frames, which enables the computation of object positions and velocities based on the instance masks as in the training phase. We then run a roll-out for N consecutive frames with the interaction network. We evaluate this rollout by comparing the predicted positions or reconstructed pixels with the ground truth. In object following, we alternate between short-term rollout (using the interaction network to predict the next frame) and object position refinement (using the renderer). This allows us to put an index on each object, and follow them through large periods of occlusions. During full occlusion, the position is solely determined by the interaction network, since the object position refinement has a zero gradient. During full or partial occlusion, object position refinement is used to reconstruct a better estimate of the positions and velocities. To test object following, we measure the accuracy of the position estimates across long sequences containing occlusions. We also evaluate the ability to detect the violation of object permanence (objects disappearing or appearing out of nowhere). Our evaluation is mostly based on a synthetic dataset, which we release for this paper. We also study generalization to real scenes, and compare to baseline models on the object permanence subset of the intuitive physics benchmark (Riochet et al., 2018).

4.1 Evaluating Future Prediction

We use the pybullet physics simulator (https://pypi.org/project/pybullet) to generate videos of a variable number of balls of different colors and sizes bouncing in a 3D scene (a large box with solid walls) containing a variable number of smaller static 3D boxes. We generate five datasets, where we vary the camera tilt and the presence of occluders. In the first dataset ("Top view") we record videos with a top camera view (or 90◦), where the borders of the frame coincide with the walls of the box. In the second dataset ("Top view+occ"), we add a large moving object occluding 25% of the scene.
Finally, we decrease the camera viewing angle to 45◦, 25◦ and 15◦, which results in an increasing amount of inter-object occlusions due to the perspective projection of the 3D scene onto the 2D image plane. We computed the proportion of time each object is occluded or partially occluded and found 3.1% in the top-view videos, 31.1% in the top-view occluded videos, and 5.9%, 11.7%, 13.4% in the 45◦, 25◦, 15◦ tilted videos, respectively. Additional details of the datasets are given in the supplementary material.

Inter-object occlusion investigation. In this section we consider prediction horizons of 5 and 10 frames, and evaluate the position error as the L2 distance between the predicted and target object positions. The L2 distance is computed in the 3D Cartesian scene coordinates, such that results are comparable across different camera tilts. Results are shown in Table 1. We first note that our model trained on mask and depth prediction significantly outperforms the linear baseline, which is computed as an extrapolation of the position of objects based on their initial velocities. Moreover, the results of our method are relatively stable across challenging setups with occlusions by external objects or frequent self-occlusions in tilted views. This demonstrates the potential ability of our method to be trained from real videos where occlusions and other factors usually prevent reliable recovery of object states.

Ablation Studies. As an ablation study we replace the Recurrent Interaction Network (RecIntNet) in our model with a multi-layer perceptron. This MLP contains four hidden layers of size 180 and is trained the same way as RecIntNet, modelling acceleration as described in Section 3.3. To deal with the varying number of objects in the dataset, we pad the inputs with zeros. We observe that RecIntNet allows more robust predictions through time. As a second ablation study, we train the Recurrent Interaction Network without modelling acceleration (Section 3.3). This is similar to the model described in (Janner et al., 2018), where the object representation is not decomposed into position / velocity / intrinsic properties, but is rather an (unstructured) 256-dimensional vector. We observe a significant loss in performance, tending to confirm that modelling position and velocity explicitly, and having a constant velocity prior on motion (given by equation (2)), improves future predictions. As a third ablation study, we train a deterministic variant of RecIntNet, where only the sequence of states is predicted, without the uncertainty term τ. The loss considered is the mean squared error between the predicted and the observed state. Observed results are slightly worse than our model handling uncertainty (see NoProba-RIN), but close enough to say that this is not a key feature for modelling 5 or 10 frames in the future. In qualitative experiments, however, we observed more robust long-term predictions after introducing the uncertainty term τ in the model and the loss (equation 3). For the purpose of comparison, we also evaluate three models trained using ground truth object states. Our Recurrent Interaction Network trained on ground truth object states gives similar results to the model of (Battaglia et al., 2016). As expected, training on ground truth states (effectively ignoring occlusions and other effects) performs better than training from object masks and depth.
We also compare with the CNN autoencoder of Riochet et al. (2018), showing that our model gives better forward mask and depth predictions than CNN auto-encoders trained end-to-end. Full results are given in the supplementary material.

Generalization to real scenes. We construct a dataset of 22 real videos, containing a variable number of colored balls and blocks in motion. Videos are recorded with a Microsoft Kinect 2 device, including RGB and depth frames. The setup is similar to the one generated with pybullet, recorded with a top camera view and containing 4 balls and a variable number of blocks (from 0 to 3). Here again, the borders of the frame coincide with the walls of the box. Taking as input the object segmentation of the first two frames, we use our model to predict object trajectories through the whole video (see Figure 4). We use the model trained on top-view pybullet videos, without fine-tuning the weights. We measure the error between predictions and ground truth positions along the rollout. Results are shown in Table 2 and clearly demonstrate that our approach outperforms the linear and MLP baselines.

4.2 Evaluating object following

Evaluation on long roll-outs. At test time we ran longer roll-outs (up to 30 frames), iteratively corrected by our occlusion-aware refinement procedure. This can be viewed as a form of tracking evaluation. Table 3 shows the percentage of object predictions that diverge by more than an object diameter (20 pixels) using this method. The performance is very good, even for tilted views. Supplementary Figure 1 shows these numbers as a function of the pixel threshold. Roll-outs (without refinement) are provided in the following anonymous google drive (link), showing qualitatively convincing trajectories and bouncing behaviors, even for tilted views.

Evaluation on the IntPhys benchmark. Riochet et al. (2018) propose a benchmark to evaluate intuitive physics models. Drawing inspiration from infant development studies, the benchmark consists of classifying whether a particular video is physically possible or impossible. We focus on the task O1 / Occluded / Dynamic_1, evaluating the notion of object permanence in the presence of occlusions. From the provided train set (www.intphys.com), containing only possible videos, we train our model to predict a sequence of 8 frames from an input pair of frames. The evaluation subset O1 / Occluded / Dynamic_1 contains 720 videos, forming 180 quadruplets of (2 possible / 2 impossible) videos. Starting from the first visible position of an object, we predict its trajectory until the end of the video, refining the prediction at every time step. For each video, the predicted masks are compared with the observed masks, resulting in a sequence of reconstruction errors. We derive an implausibility score for a video as the maximum error over the whole sequence. For each quadruplet of (2 possible / 2 impossible) videos, we classify the two videos that have the highest implausibility score as impossible, and the two others as possible. Table 4 reports error rates, in comparison with baselines from Riochet et al. (2018). We can see a clear improvement of our method, confirming it can follow objects through long occlusions. A minimal sketch of this scoring and classification step is given below.
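For illustration, the implausibility scoring and quadruplet classification described above could be written as follows; the function names and the per-frame reconstruction errors passed in are assumptions for the sketch, not the authors' code.

```python
import numpy as np

def implausibility_score(frame_errors):
    """Implausibility score of a video: maximum mask reconstruction error
    over the predicted sequence (higher = more likely physically impossible)."""
    return float(np.max(frame_errors))

def classify_quadruplet(videos_errors):
    """Given per-frame reconstruction errors for the 4 videos of a quadruplet,
    label the two videos with the highest implausibility score as impossible."""
    scores = [implausibility_score(e) for e in videos_errors]
    order = np.argsort(scores)            # indices sorted by ascending score
    labels = ["possible"] * 4
    for idx in order[-2:]:                # two highest scores
        labels[idx] = "impossible"
    return labels
```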
5 Discussion

Learning the physics of simple macroscopic object dynamics and interactions is a relatively easy task when ground truth coordinates are provided to the system, and techniques like Interaction Networks trained with a future frame prediction loss are quite successful (Battaglia et al., 2016). Of course, a major drawback of this kind of system is that it is basically restricted to learning physics from 3D simulators. In real life, the ground truth coordinates of each object are unknown; only projected 2D views are available. Interestingly, we found that projective geometry is not, in and of itself, a difficulty. Indeed, when an Interaction Network is fed, not with 3D Cartesian object coordinates, but with a 2.5D projective referential such as the xy position of objects in a retina (plus depth), the accuracy of the prediction remains unchanged compared with the Cartesian ground truth. As RGBD videos are relatively easy to collect in large quantities, it would not be difficult to train systems with such inputs. But real world videos raise two other major difficulties: (i) images are not easily segmentable into objects, and (ii) objects do not always remain visible and tend to be occluded by other objects. This makes the ground truth coordinates of objects only partially observable. Here, we provided a first step towards more realistic physics learning by addressing the occlusion problem. We introduce a physics learning system composed of an Interaction Network followed by a trainable Renderer. The Interaction Network has been made recurrent, such that ground truth positions and velocities only have to be fed at the first frame. The renderer can be qualified as 2.5D, in that it takes as input the positions and velocities of objects (in retina pixel xy-d coordinates) and computes the resulting instance masks (plus depth). The 2.5D renderer is itself relatively lightweight (only 1233 parameters). It is based on a rather simple convolutional architecture, uses position fields, and can be trained with few examples to render objects of arbitrary shapes with controllable 2.5D positions respecting occlusions. The outcome can be seen on the rendering of tilted views as shown in the videos provided in the anonymous google drive (link). What we showed is that instead of training the interaction network to predict ground truth positions, we can directly train it through estimates obtained from mask+depth, corrected through the renderer. The resulting system, of course, produces less accurate predictions than when trained with real positions, but is still better than either linear baselines or CNN mask prediction networks (Riochet et al., 2018; Lerer et al., 2016). Interestingly, the reconstruction loss is still effective even in the presence of external occluders, or when objects occlude each other because of a tilted view during training. This can be explained by the fact that when an object is occluded, the gradient of the reconstruction loss will be zero (because no matter where the object is predicted to be, so long as it is predicted to be behind another object, it is not visible, hence contributes no loss). This amounts to simply reducing the size of the training set, so it only slightly degrades the final performance. Importantly, this cancellation of the losses occurs without explicitly telling the system which objects are occluded and which are not. This is implicitly learnt by the system through the rendering network. Applying this method to the intuitive physics benchmark presented in (Riochet et al., 2018), we show it outperforms baselines on modelling object dynamics under occlusions. Further work needs to be done to fully train this system end-to-end, in particular by learning the renderer and the interaction network jointly.
Another avenues relate to the first problem raised above, i.e. the segmentation problem. Object segmentation based on raw pixels has been addressed in previous work, but yields errors (over- or under-segmentations) on more realistic datasets, which could have dramatic effect on the interaction network, which crucially depends on reliable object identification. Such issues need to be addressed before end-to-end physics prediction systems can be trained and used with real videos, and approximate the ability of infants to predict the interactions of objects from live or video scenes. S1. Description of supplementary videos The videos are in the anonymous google drive: https://drive.google.com/drive/ folders/111XK6GZnmHjd_7US6LGkxg6cAhJ2WDxQ?usp=sharing in the videos/ subdirectory. See also the README slideshow. • scene_overview.mp4 shows raw videos of the entire environment. • tracking_occlusions_*.mp4 show examples of position prediction through complete occlusions, using our occlusion-aware object position refinement. This shows that our model can keep track of the object identity through complete occlusions, mimicking “object permanence". • one_class*.mp4 show different examples of our model following motion of multiple objects in the scene. All balls have the same color which makes them difficult to follow in case of mutual interactions. Videos come from tilted 25◦ experiments, which are the most challenging because they include inter-object occlusions. Dots represent the predicted position of each object, the color being its identity. Our model shows very good predictions with small colored markers (dots) well centered in the middle of each object, with marker color remaining constant for each object preserving the object identity during occlusions and collisions. one_class_raw*.mp4 show rendered original views of the same dynamic scenes but imaged from a different viewpoint for better understanding. • rollout_0.mp4, rollout_1.mp4 show three different rollouts without position refinement. From left to right: ground truth trajectories, our model trained of state, our model trained on masks, our model trained on masks with occlusions during training. Rollout length is 20 frames. • rollout_tilt*_model.mp4 and rollout_tilt*_groundtruth.mp4 show the same dynamic scene but observed with various camera tilts (e.g. tilt45_model.mp4 show a video for a camera tilt of 45 degrees). *_model.mp4 are rollouts of our Recurrent Interaction Network (RecIntNet) computed without the occlusion-aware position refinement based on the observed masks (pure forward prediction of the dynamics model). *_groundtruth.mp4 are the corresponding ground-truth trajectories, rendered with the Compositional Rendering Network. • intphys_*.mp4 show object following in the IntPhys training set. • rollout_pybullet_*.mp4 show free rollout (no refinement) on synthetic dataset. • rollout_real_*.mp4 show generalization to real scenes. S2. Datasets To validate our model, we use pybullet1 physics simulator to generate videos of variable number of balls of different colors and sizes bouncing in a 3D scene (a large box with solid walls) containing a variable number of smaller static 3D boxes. We generate five dataset versions, where we vary the camera tilt and the presence of occluders. All experiments are made with datasets of 12,000 videos of 30 frames (with a frame rate of 20 frames per second). 
For each dataset, we keep 2,000 videos separate to pre-train the renderer, 9, 000 videos to train the physics predictor and 1, 000 videos for evaluation. Our scene contains a variable number of balls (up to 6) with random initial positions and velocities, bouncing against each other and the walls. Initial positions are sampled from a uniform distribution in the box [1, 200]2, all balls lying on the ground. Initial velocities along x and y axes are sampled in Unif([−25, 25]) units per frame, initial velocity along z-axis is set to 0. The radius of each ball is sampled uniformly in [10, 40]. Scenes also contain a variable number of boxes (up to 2) fixed to the floor, against which balls can collide. Contrary to Battaglia et al. (2016) where authors set a frame rate of 1000 frames per second, we sample 30 frames per second, which is more reasonable when working with masks (because of the computation cost of mask prediction). Top-view. In the first dataset we record videos with a top camera view, where the borders of the frame coincide with the walls of the box. Here, initial motion is orthogonal to the camera, which makes this dataset very similar to the 2D bouncing balls datasets presented in Battaglia et al. (2016) and Watters et al. (2017). However, our dataset is 3D and because of collisions and the fact that the balls have different sizes, balls can jump on top of each other, making occlusions possible, even if not frequent. Top-view with Occlusions. To test the ability of our method to learn object dynamics in environments where occlusions occur frequently, we record the second dataset including frequent occlusions. We add an occluder to the scene, which is an object of irregular shape (an airplane), occluding 25% of the frame and moving in 3D between the balls and the camera. This occluder has a rectilinear motion and goes from the bottom to the top of the frame during the whole video sequence. Sample frames and rendered predictions can be found in the supplementary material. Tilted-views. In three additional datasets we keep the same objects and motions but tilt the camera with angles of 45◦, 65◦ and 75◦ degrees. Increasing the tilt of the camera results in more severe inter-object occlusions (both partial and complete) where the balls pass in front of each other, and in front and behind the static boxes, at different distances to the camera. In addition, the ball trajectories are becoming more complex due to increasing perspective effects. In contrary to the top-view experiment, the motion is not orthogonal to the camera plane anymore, and depth becomes crucial to predict the future motion. S3. Training details This section gives details of the offline Pre-Training of the compositional Rendering Network and detailed outline of the algorithm for training the Recurrent Interaction Network. Pre-Training the Compositional Rendering Network. We train the neural renderer to predict mask and depth M̂t, D̂t from a list of objects [px, py, d, c] where px, py are x-y coordinates of the object in the frame, d is the distance between the object and the camera and c is a vector for intrinsic object properties containing the size of the object, its class (in our experiments a binary variable for whether the object is a ball, a square or an occluder) and its color as vector in [0, 1]3. The target mask is a 128 × 128 image where each pixel value indicates the index of the corresponding object mask (0 for the background, i ∈ 1..N for objects). 
The loss on the mask is the negative log-likelihood, which corresponds to the average classification loss on each pixel,

$L_{mask}(\hat{M}, M) = -\sum_{i \le h,\, j \le w} \sum_{n \le N} \mathbb{1}(M_{i,j} = n) \log(\hat{M}_{i,j,n}), \qquad (1)$

where the first sum is over individual pixels indexed by i and j, the second sum is over the individual objects indexed by n, M̂ ∈ [0, 1]^{h×w×N} are the predicted (soft) object masks, and M ∈ {0, ..., N}^{h×w} is the scene ground truth mask containing all objects. The target depth map is a 128×128 image with values normalized to the [-1, 1] interval during training. The loss on the depth map prediction is the squared error

$L_{depth}(\hat{D}, D) = \sum_{i \le h,\, j \le w} (\hat{D}_{i,j} - D_{i,j})^2, \qquad (2)$

where D̂ and D ∈ R^{h×w} are the predicted and ground truth depth maps, respectively. The final loss used to train the renderer is the weighted sum of the losses on masks and depth maps, L = 0.7 · Lmask + 0.3 · Ldepth. We use the Adam optimizer with default parameters, and reduce the learning rate by a factor of 10 each time the loss on the validation set does not decrease during 10 epochs. We pre-train the network on a separate set of 15,000 images generated with pybullet and containing similar objects as in our videos. A minimal code sketch of this combined renderer loss is given below, after Table S1.

Training details of the Recurrent Interaction Network. The detailed outline of training the Recurrent Interaction Network is given in Algorithm 1.

Algorithm 1: Train Recurrent Interaction Network
Data: T, L: length of the video and prediction span, respectively;
  Mt, t = 1..T: instance masks; Dt, t = 1..T: depth maps;
  Rend: pre-trained Renderer;
  RIN: Recurrent Interaction Network (initialized with constant velocity motion);
  Criterion: stopping criterion (RIN loss on validation);
  Detection(mt, dt): returns centroid, depth and size of instance masks;
  NLL, MSE: negative log-likelihood and mean squared error, respectively.
Result: trajectory estimates p̄t+1..t+L; trained Recurrent Interaction Network wRIN.
while Criterion(RIN) do
  for t ∈ {1..T − 1} do
    // Initialization of positions and velocities
    // Initial object positions from observed masks and depths
    p̂t ← Detection(mt, dt)
    // Occlusion-aware object position refinement using Renderer
    p̄t ← argmin_{p ← p̂t} NLL(Rend(p), (mt, dt))
    // Estimate object velocities from consecutive frames
    v̄t ← p̄t+1 − p̄t
  for t ∈ {1..T − L} do
    // Training Recurrent Interaction Network
    // Predict sequence of states (of all objects) using roll-out
    p̂t+1..t+L ← RIN(p̄t, v̄t)
    // Occlusion-aware object position refinement
    p̄t+1..t+L ← argmin_{p ← p̂t+1..t+L} NLL(Rend(p), (mt, dt))
    // Update weights of Recurrent Interaction Network
    wRIN ← argmin_w MSE(RIN(p̄t), p̂t+1..t+L)

Given an initial state st, the Recurrent Interaction Network recursively predicts a sequence of future states ŝt+1, ŝt+2, ..., ŝt+L, as well as error terms τ̂t+1, τ̂t+2, ..., τ̂t+L. This predicted sequence is compared to object positions (ground truth or derived from masks after refinement), and the loss is computed as the sum of the negative log-likelihood (equation (3)) along the sequence.

Table S1. Aggregate pixel reconstruction error for mask and depth, for a prediction span of two frames. This error is the loss used for training: a weighted combination of the mask error (per-pixel classification error) and the depth error (mean squared error).

                                          Top view   Top view+occlusion   45◦ tilt   25◦ tilt   15◦ tilt
CNN autoencoder (Riochet et al., 2018)     0.0147        0.0451            0.0125     0.0124     0.0121
RIN, trained on mask+depth                 0.0101        0.0342            0.0072     0.0070     0.0069
Proba-RIN, trained on mask+depth           0.0100        0.0351            0.0069     0.0071     0.0065
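As a rough illustration, the combined renderer pre-training loss described above (equations (1) and (2), weighted 0.7 / 0.3) could be written as follows in PyTorch; the function name and tensor shapes are assumptions for the sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def renderer_pretraining_loss(mask_logits, pred_depth, target_mask, target_depth):
    """Weighted renderer loss: per-pixel classification of the mask plus squared
    error on the depth map, combined as 0.7 * L_mask + 0.3 * L_depth.

    mask_logits:  (B, N+1, H, W) per-pixel scores (background + N objects)
    pred_depth:   (B, H, W) predicted depth, normalized to [-1, 1]
    target_mask:  (B, H, W) integer mask, 0 = background, 1..N = objects
    target_depth: (B, H, W) ground-truth depth, normalized to [-1, 1]
    """
    l_mask = F.cross_entropy(mask_logits, target_mask)   # negative log-likelihood per pixel
    l_depth = F.mse_loss(pred_depth, target_depth)       # squared depth error
    return 0.7 * l_mask + 0.3 * l_depth
```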
For the Recurrent Interaction Network, we use the Adam optimizer, and divide the learning rate by L to be consistent with the size of the sequence (as the loss is a sum over a sequence of length L). The same learning rate decay and stopping procedure is applied. Sequence lengths of 4, 6 and 10 were tested during training, with lengths of 10 giving slightly more stable rollouts.

S4. Occlusion-aware refinement of object positions

Position refinement consists of using the pre-trained Renderer to correct the estimated positions of all objects in a particular frame. To do so, we give the position estimates as input to the Renderer, which outputs a corresponding pair of mask and depth field for the frame, (M̂, D̂), properly rendering the inter-object occlusions. This prediction is compared to the observed mask and depth field, returning errors that are backpropagated through the frozen weights of the Renderer. We perform gradient descent on the input itself to correct the object position and size estimates according to the observations. In our experiments, we set the learning rate to 0.01 and compute 200 iterations of gradient descent. Details of the loss are given in the supplementary material. For object positions estimated from object masks, this refinement allows us to reduce errors due to partial occlusions (moving the predicted center of one object from its visible mask centroid to its real center). A minimal code sketch of this refinement step is given at the end of this supplementary.

S5. Future prediction: Comparison with Riochet et al. (2018)

We evaluate the error of the mask and depth prediction, measured by the training error described in detail above. Here, we compare our model to the CNN autoencoder of Riochet et al. (2018), which directly predicts future masks from current ones, without explicitly modelling the dynamics of the individual objects in the scene. Note this baseline is similar to Lerer et al. (2016). Results are shown in Table S1. As before, the existence of external occluders or the presence of tilt degrades the performance, but even in this case, our model remains much better than the CNN autoencoder of Riochet et al. (2018).

S6. Detailed roll-out results

In Figure S1, we report the proportion of correctly followed objects for different rollout lengths (5, 10 and 30 frames) as a function of the distance error (pixels). Note that the size of the smallest object is around 20 pixels.
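To make the refinement procedure of Section S4 concrete, a minimal sketch in PyTorch might look as follows; here `renderer` and `render_loss` stand for the pre-trained Compositional Rendering Network and its mask+depth loss, and the function signature is an assumption for illustration.

```python
import torch

def refine_positions(renderer, render_loss, states, obs_mask, obs_depth,
                     lr=0.01, n_iters=200):
    """Occlusion-aware refinement: gradient descent on the object states
    themselves, through the frozen renderer, to better match the observed
    mask and depth field (cf. Section S4)."""
    for p in renderer.parameters():
        p.requires_grad_(False)                   # keep the renderer frozen
    states = states.clone().detach().requires_grad_(True)
    optimizer = torch.optim.SGD([states], lr=lr)
    for _ in range(n_iters):
        optimizer.zero_grad()
        pred_mask, pred_depth = renderer(states)  # render current state estimates
        loss = render_loss(pred_mask, pred_depth, obs_mask, obs_depth)
        loss.backward()                           # gradients flow only to the states
        optimizer.step()
    return states.detach()                        # refined positions and sizes
```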
1. How does the proposed method handle diversity in predicting future trajectories?
2. How does the proposed method compare with other state-of-the-art future prediction approaches?
3. How is the "aggregate pixel reconstruction error" computed in Table S1?
4. Are there any missing references in the paper?
5. How does the proposed method handle occlusions in predicting future trajectories?
6. What is the main contribution of the paper, and how does it differ from other video prediction models?
7. How can the output of the model be more diverse, and what would be the benefit of that?
8. Can the authors provide more experiments with other baselines to strengthen their claims?
Review
Review This paper proposes a method to predict future trajectories by modeling partial and full occlusions. Although it is well-written and the topic sounds interesting, I failed to catch why this approach is required for this setting. So, to strengthen the message of this paper, I listed a couple of suggestions and comments below (from the most important to the least important): 1. It is a bit hard to catch how this model handles "diversity." Specifically, when predicting the futures, it should be able to generate stochastic outputs. However, I failed to find how diverse the output of the model is. If the output is not that stochastic, then it would be tough to believe that the model can "predict" the future; instead, it may "extrapolate" the current condition only. To reassure such concerns, I recommend reporting how diverse your output is. (One easy way is to report the variance of the predicted center mass values between multiple samples while reporting the l2 distance.) 2. For the future prediction task, it would be much better if it is compared with various state-of-the-art future prediction approaches [1, 2, 3, 4, 5, 6]. For some of the models, it could not be able to compare directly with this approach (e.g., lack of 'center of mass' information). However, it would be still okay once it is compared with other state-of-the-art results without feeding some 3D information (e.g., provide projected 2D video as an input). By doing so, I believe the readers can easily catch (1) why it is better to predict physical interaction in 3D space (instead of directly predicting from a 2D space), and (2) also why predicting occlusion is essential in this problem setting. 3. Minor comments: (a) It is a bit hard to catch how the author computes the "aggregate pixel reconstruction error" in Table S1. I recommend adding an equation number there to make it clear. (b) There are a couple of missing references: the last sentence on page 4, the first paragraph in Supplementary, the last sentence in Supplementary page 3, etc. (c) \citep is often misused. Please replace some inappropriate \citep with \citet. (d) Please check the format of the reference, as well; currently, it has various styles even for the same source/conference. ------------------------------------------------------------------ [Some comments based on the authors' rebuttal] I thank the authors for their thorough comments and detailed explanations for each question. I carefully read the whole (not just my part), but it didn't change my mind; it would be much better if the claim comes with a more directly comparable result. Some additional comments: Q1-comment) I think the limitation of "learning to extrapolate"-style video prediction approach is partially presented in Reviewer #2's claim as well. Therefore, in this context, I recommend the author to show a better result to reassure the reader's concern. Q2-comment) I at least strongly recommend to add more experiments with other baselines, rather than relying mainly on the original model of the dataset. Although the input condition of a model could be different, I at least do believe that it will help the readers to catch the benefit of your setting more clearly. I hope this review phase would make your paper more powerful. 
[1] Liang et al., Dual Motion GAN for Future-Flow Embedded Video Prediction, in ICCV, 2017
[2] Denton and Fergus, Stochastic Video Generation with a Learned Prior, in ICML, 2018
[3] Wichers et al., Hierarchical Long-term Video Prediction without Supervision, in ICML, 2018
[4] Wang et al., Video-to-Video Synthesis, in NeurIPS, 2018
[5] Hsieh et al., Learning to Decompose and Disentangle Representations for Video Prediction, in NeurIPS, 2018
[6] Minderer et al., Unsupervised Learning of Object Structure and Dynamics from Videos, in NeurIPS, 2019
ICLR
Title Occlusion resistant learning of intuitive physics from videos Abstract To reach human performance on complex tasks, a key ability for artificial systems is to understand physical interactions between objects, and predict future outcomes of a situation. This ability, often referred to as intuitive physics, has recently received attention and several methods were proposed to learn these physical rules from video sequences. Yet, most these methods are restricted to the case where no occlusions occur, narrowing the potential areas of application. The main contribution of this paper is a method combining a predictor of object dynamics and a neural renderer efficiently predicting future trajectories and explicitly modelling partial and full occlusions among objects. We present a training procedure enabling learning intuitive physics directly from the input videos containing segmentation masks of objects and their depth. Our results show that our model learns object dynamics despite significant inter-object occlusions, and realistically predicts segmentation masks up to 30 frames in the future. We study model performance for increasing levels of occlusions, and compare results to previous work on the tasks of future prediction and object following. We also show results on predicting motion of objects in real videos and demonstrate significant improvements over state-of-the-art on the object permanence task in the intuitive physics benchmark of Riochet et al. (2018). 1 Introduction Learning intuitive physics has recently raised significant interest in the machine learning literature. To reach human performance on complex visual tasks, artificial systems need to understand the world in terms of macroscopic objects, movements, interactions, etc. Infant development experiments show that young infants quickly acquire an intuitive grasp of how objects interact in the world, and that they use these intuitions for prediction and action planning (Carey, 2009; Baillargeon & Carey, 2012). This includes the notions of gravity (Carey, 2009), continuity of trajectories (Spelke et al., 1995), collisions (Saxe & Carey, 2006), etc. Object permanence, the fact that an object continues to exist when it is occluded, (Kellman & Spelke, 1983), is one of the first concepts developed by infants. From a modeling point of view, the key scientific question is how to develop general-purpose methods that can make physical predictions in noisy environments, where many variables of the system are unknown. A model that could mimic even some of infant’s ability to predict the dynamics of objects and their interactions would be a significant advancement in model-based action planning for robotics (Agrawal et al., 2016; Finn & Levine, 2017). Importantly, to be applied to real-world problems, such a model needs to predict object motion in 3D and handle frequent inter-object occlusions. Yet, to our knowledge, most current works on learning intuitive physics get around this challenge by either i) working in 2D spaces with no occlusions (Battaglia et al., 2016; Chang et al., 2016; Fragkiadaki et al., 2015) or ii) learning end-to-end models without decomposing the scene into objects (Agrawal et al., 2016; Lerer et al., 2016; Finn et al., 2016). The former methods have demonstrated that learning models of intuitive physics is possible but assume ground truth positions of objects are available at both training and test time. 
The latter methods can operate directly on pixel inputs without knowing ground truth positions of objects but are typically limited to a small number of objects and generalize poorly to new setups (e.g. a new number of objects in the scene, see (Lerer et al., 2016)). A third class of methods has recently emerged (Janner et al., 2018) that first decomposes the input image of the 3D scene into layers corresponding to masks of individual objects and learns scene dynamics given such object-centric decomposition. Note that here the object dynamics is learnt from pixel masks of individual objects, rather than their ground truth positions. This is difficult for 3D scenes due to frequent inter-object occlusions that present two major challenges. First, estimating accurate position and velocity of objects is challenging due to partial occlusions by other objects. Second, objects can be fully occluded by other objects for a significant number of frames. This work falls into the third class of compositional methods, but develops an occlusion resistant model for learning intuitive physics that addresses both of these challenges due to inter-object occlusions. In detail we propose a compositional model that from object instance masks and depth fields in two consecutive frames, (Mt,t+1, Dt,t+1), estimates the center, velocity and size of objects. This predicted state ŝt is then used as input of a Recurrent Interaction Network, which predicts a sequence of futures states ŝt+2, ..., ŝt+L. This sequence of states is given to the Compositional Rendering Network which produces segmentation masks M̂t+2, ..., M̂t+L and depth estimates D̂t+2, ..., D̂t+L in future frames. The key innovation of the proposed model is dealing with partial and complete occlusions in the scene. To deal with partial occlusions, the obtained sequence of masks+depths is compared to the ground truth, and gradients are backpropagated through the pre-trained Compositional Rendering Network to refine state predictions. This allows us to refine positions of partially occluded objects where simply taking the centroid of the observed portion of the mask results in an incorrect estimate of the object position. With this refinement object positions are corrected taking into account the unobserved (occluded) portion of the object. The refined state estimates s̄t+1, ..., s̄t+L are used at training time for learning parameters of the Recurrent Interaction Network and at test time to improve accuracy of object position prediction when following partially occluded objects. To deal with full occlusions, when the object is not visible in multiple frames, we use the learnt model of object dynamics (Recurrent Interaction Network) to predict the position of the object multiple frames ahead and thus recovering the object position after the occlusion. Using the proposed approach, we show that it is possible to learn object dynamics in 3D environments with severe inter-object occlusions and predict segmentation masks up to 30 frames in the future despite occlusion other objects thus mimicking object permanence. 2 Related work Forward modelling in videos. Forward modelling in video has been studied for action planning (Ebert et al., 2018; Finn et al., 2016) and as a scheme for unsupervised learning of visual features (Lan et al., 2014; Mathieu et al., 2015). In that setup, a model is given a sequence of frames and has to generate frames in future time steps. 
To succeed in this task, such models need to predict object movements, suggesting that they need to learn physical regularities from video. However, models for end-to-end future frame prediction tend to perform poorly on long-term prediction tasks (say more 5-8 frames (Lan et al., 2014; Mathieu et al., 2015; Finn et al., 2016)), failing to preserve object properties and generating blurry outputs. This suggests that models for intuitive physics may require a more structured representation of objects and their interactions. Learning dynamics of objects. Longer term predictions can be more successful when done on the level of trajectories of individual objects. For example, in (Wu et al., 2017b), the authors propose "scene de-rendering", a system that builds an object-based, structured representation from a static (synthetic) image. The recovered state can be further used for physical reasoning and future prediction using a physics engine on both synthetic and real data (Battaglia et al., 2013; Wu et al., 2017a). Future prediction from static image is often multi-modal (e.g. car can move forward or backward) and hence models able to predict multiple possible future predictions, e.g. based on variational auto-encoders (Xue et al., 2016), are needed. Others have developed structured models that factor object motion and object rendering into two learnable modules. Examples include (Watters et al., 2017; Marco Fraccaro, 2017; Ehrhardt et al., 2017b;a) that combine object-centric dynamic models and visual encoders. Such models parse each frame into a set of object state representations, which are used as input of a "dynamic" model, predicting object motion. However, (Marco Fraccaro, 2017) restrict drastically the complexity of the visual input by working on binary 32x32 frames, and (Ehrhardt et al., 2017b;a; Watters et al., 2017) still need ground truth position of objects to train their models. None of these work explicitly models inter-object occlusions, which is the focus of our method. In our work, we build on learnable models of object dynamics (Battaglia et al., 2016) and (Chang et al., 2016), which have the key property that they are compositional and hence can model a variable number of objects, but extend them to learn from visual input rather than ground truth object state vectors. Our work is related to (Janner et al., 2018), done independently and concurrently with our work, who develop an object-oriented model of dynamics coupled with a differentiable object renderer to predict a single image with segmentation masks of objects in a future time, given a single still image as input. In contrast, our model predicts frame-by-frame object motion in scenes with partial and full object occlusion. This is possible because (i) our model of dynamics is recursive, predicting a whole sequence of object movements (instead of one single image in future (Janner et al., 2018)) that allows the model to be applied recursively to follow an object through complete occlusion by other objects; (ii) we design a refinement procedure that allows to refine the estimated positions of objects in case of partial occlusions. In addition, in contrast to (Janner et al., 2018) our model predicts velocity of objects and depth of the scene (also taking as input a pair of frames and the depth field). Others have proposed unsupervised methods to discover objects and their interactions in 2d videos (van Steenkiste et al., 2018). 
It is also possible to construct Hierarchical Relation Networks (Mrowca et al., 2018), representing objects as graphs and predicting interactions between pairs of objects. However, this task is still challenging and requires full supervision in the form of ground truth position and velocity of objects. Learning physical properties from visual inputs. Related are also methods for learning physical properties of objects. Learning of physical properties, such as mass, volume or coefficients of friction and restitution, has been considered in (Wu et al., 2016). Others have looked at predicting the stability and/or the dynamics of towers of blocks (Lerer et al., 2016; Zhang et al., 2016; Li et al., 2016a;b; Mirza et al., 2017; Groth et al., 2018). Our work is complementary. We don’t consider prediction of physical properties but focus on learning models of object dynamics handling inter-object occlusions at both training and test time. (Greff et al., 2019) Contributions. We describe a model that learns complex dynamics of objects in a 3D environment, where inter-object occlusions occur frequently. Our model combines an abstract representation of the scene (position, velocity and depth of objects), with a compositional neural renderer predicting the resulting object masks with depth and explicitly modelling occlusions between objects. This procedure allows us to train the model even when some objects are partially or totally occluded. Unlike (Watters et al., 2017), our model is fully compositional and handles variable number of objects in the scene. Moreover, it does not require as input annotated inter-frame correspondences during training. 3 Occlusion resistant modeling for intuitive physics This section describes our model for occlusion resistant learning of intuitive physics. We first describe the learning set-up considered in this work. We then describe in detail the two main components of our model. In section 3.2 we outline the compositional renderer with occlusion reasoning that predicts object masks given a scene state representation, and in section 3.3 we detail the recurrent interaction network that predicts the scene state evolution over time. Finally, in section 3.4 we outline the training procedure. 3.1 Set-up overview As illustrated in Figure 1 (and Algorithm in the Supplementary Material), during learning our method observes a sequence of object instance masks and depth fields Mt,..,t+L, Dt,..,t+L. The mask for each frame is composed of a set of channels where each channel represents pixels corresponding to an individual object, along with their color and shape (boxes or balls of different sizes). The model does not require the knowledge of correspondence between objects over time, which might be difficult to obtain in practice. Our model is composed of two networks described below: a pre-trained occlusion sensitive Compositional Rendering Network (Renderer) which renders masks and depth fields given a set of object positions (also called states), and a trainable Recurrent Interaction Network (RecIntNet) which predicts positions of objects in future frames. 3.2 Occlusion modeling: the Compositional Rendering Network For each pixels: [x,y] MLP xy [px,py, d,c] Intput objects (3 hidden layers) Occlusion predictor Bilinear interpolation x2 3 x (Convolution 3x3) Lmask Ldepth S Object Renderer Object mask Object depth Scene mask Scene depth coordinates (xk, yk, dk) of object k in a frame together with additional dimensions for intrinsic object properties (shape, color and size) (c). 
The network predicts object’s binary mask, Mk as well as the depth map Dk. The input vector (xk, yk, dk, ck) ∈ Rl is first copied into a (l+2)×16×16 tensor, where each 16×16 cell position contains an identical copy of the input vector together with x and y coordinates of the cell. Adding the x and y coordinates may seem redundant, but this kind of position field enables a very local computation of the shape of the object and avoids a large number of network parameters (similar architectures were recently also studied in (?)). The input tensor is processed with 1 × 1 convolution filters. The resulting 16-channel feature map is further processed by three blocks of convolutions. Each block contains three convolutions with filters of size 1× 1, 3× 3 and 1× 1 respectively, and 4, 4 and 16 feature maps, respectively. We use ReLU pre-activation before each convolution, and up-sample (scale of 2 and bilinear interpolation) feature maps between blocks. The last convolution outputs N + 1 feature maps of size 128 × 128, the first feature map encoding depth and the N last feature maps encoding mask predictions for the individual objects. The object rendering network is applied to all objects present, resulting in a set of masks and depth maps denoted as {(M̂k, D̂k), k = 1..N}. The Occlusion predictor takes as input the masks and depth maps for N objects and aggregates them to construct the final occlusion-consistent mask and depth map. To do so it computes, for each pixel i, j ≤ 128 and object k the following weight: cki,j = eλD̂ k i,j∑N q=1 e λD̂qi,j , k = 1..N, (1) where λ is a parameter learned by the model. The final masks and depth maps are computed as a weighted combination of masks M̂ki,j and depth maps D̂ki,j for individual objects k: M̂i,j = ∑N k=1 c k i,jM̂ k i,j , D̂i,j = ∑N k=1 c k i,jD̂ k i,j , where i, j are output pixel coordinates ∀i, j ≤ 128 and cki,j the weights given by (1). The intuition is that the occlusion renderer constructs the final output (M̂, D̂) by selecting, for every pixel, the mask with minimal depth (corresponding to the object occluding all other objects). For negative values of λ equation (1) is as a softmin, that selects for every pixel the object with minimal predicted depth. Because λ is a trainable parameter, gradient descent forces it to take large negative values, ensuring good occlusion predictions. Also note that this model does not require to be supervised by the depth field to predict occlusions correctly. In this case, the object rendering network still predicts a feature map D̂ that is not equal to the depth anymore but is rather an abstract quantity that preserves the relative order of objects in the view. This allows Renderer to predict occlusions when the target masks are RGB only. However, it still needs depth information about in the input (either true depth or relative ordering). 3.3 Dynamics prediction: the Recurrent Interaction Network (RecIntNet) To model object dynamics, we build on the Interaction Network (Battaglia et al., 2016), which predicts dynamics of a variable number of objects by modelling their pairwise interactions. Here we describe three extensions of the vanilla Interaction Network model. First, we extend the Interaction Network to model 2.5D scenes where position and velocity have a depth component. Second, we extend the Interaction Network to train from the whole sequence of future states and call this new model Recurrent Interaction Network. 
Third, we introduce variance in the position predictions, to stabilise the learning phase, and avoid penalizing too much very encertain predictions. The three extensions are described below. Modelling compositional object dynamics in 2.5D scenes. As shown in (Battaglia et al., 2016), Interaction Networks can be used to predict object motion both in 3D or in 2D space. Given a list of objects represented by their positions, velocities and size in the Cartesian plane, an Interaction Network models interactions between all pairs of objects, aggregates them over the image and predicts the resulting motion for each object. Here, we model object interactions in 2.5D space, since we have no access to the object position and velocity in the Cartesian space. Instead we have locations and velocities in the image plane plus depth (the distance between the objects and the camera). Training from a sequence of future frames. The vanilla Interaction Network (Battaglia et al., 2016) is trained to predict position and velocity of each object in one step into the future. Here, we learn from multiple future frames. In detail, we "rollout" the Interaction Network to predict a whole sequence of future states as if a standard Interaction Network was applied in recurrent manner. We found that faster training can be achieved by directly predicting changes in the velocity, hence: [p1, v1, c] = [p0 + δtv0 + δt2 2 dv, v0 + dv, c], (2) where p1 and v1 are position and velocity of the object at time t1, p0 and v0 are position and velocity at time t0, and δt = t1 − t0 is the time step. Position and velocity in pixel space (p = [px, py, d] where px, py are the position of the object in the frame), d is depth and v is the velocity in that space. Hence dv can be seen as the acceleration, and (v0 + dv),(p0 + δtv0 + δt2 2 dv) as the first and second order Taylor approximations of velocity and position, respectively. Assuming an initial weight distribution close to zero, this gives the model a prior that the object motion is linear. Prediction uncertainty. To account for prediction uncertainty and stabilize learning, we assume that object position follows a multivariate normal distribution, with diagonal covariance matrix. Each term σ2x, σ2y, σ2d of the covariance matrix represents the uncertainty in prediction, along x-axis, y-axis and depth. Such uncertainty is also given as input to the model, to account for uncertainty either in object detection (first prediction step) or in the recurrent object state prediction. The resulting loss is negative log-likelihood of the target p1 w.r.t. the multivariate normal distribution, which reduces to L ( (p̂1, τ̂1), p1 ) = (p̂1 − p1)2 exp τ̂1 + τ̂1, (3) where τ̂1 = log σ̂21 is the estimated level of noise propagated through the Recurrent Interaction Network, where σ1 concatenates σ2x, σ2y, σ2d, p1 is the ground truth state and p̂1 is the predicted state at time t+ 1. The intuition is that the squared error term in the numerator is weighted by the estimated level of noise τ̂1, which acts also as an additional regularizer. 3.4 System Training In this section we give a high level description of the procedure for training our model (details are in the supplementary Section S3). The Compositional rendering Network is pre-trained offline from masks and depths in individual frames. Training of the Recurrent Interaction Network is done in the following three steps. First (initialization phase), we select from each training video short clips containing L frames (here, we use L=10). 
3.4 System Training In this section we give a high-level description of the procedure for training our model (details are in the supplementary Section S3). The Compositional Rendering Network is pre-trained offline from masks and depths in individual frames. Training of the Recurrent Interaction Network is done in the following three steps. First (initialization phase), we select from each training video short clips containing L frames (here, we use L=10). In each frame, we estimate the position, depth and size of each object by computing the centroid of each mask, its average depth and its diameter (the maximum distance between two mask pixels). To correct errors due to partial occlusions, we perform occlusion-aware refinement of object positions using gradient descent through the pre-trained Renderer (see supplementary material). The result is a partial state vector (no velocities) for each frame, corrected for partial occlusions. Spatially close objects in two consecutive frames are linked and considered to be the same object. In a second step (prediction phase), we use the Recurrent Interaction Network to roll out L−2 predictions for the positions of these objects in future frames starting from Frame 2. Frame 1 is used to compute the initial velocities of Frame 2. In a third step (update phase), we use the distance between the ground truth positions established in step 1 and the rollout positions to perform the training of the Recurrent Interaction Network.
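The per-frame state estimation of the initialization phase (centroid, average depth, diameter) can be sketched as in the following NumPy snippet. The function name and interface are ours; the occlusion-aware refinement that corrects these estimates is described in Section S4 of the supplementary material.

```python
import numpy as np

def detect_object_state(mask, depth):
    """Estimate (centroid, mean depth, diameter) of one object from its
    binary instance mask and the frame's depth map (initialization phase).

    mask:  (H, W) boolean array, True on the object's visible pixels.
    depth: (H, W) float array.
    """
    ys, xs = np.nonzero(mask)
    px, py = xs.mean(), ys.mean()        # mask centroid
    d = depth[mask].mean()               # average object depth
    # Diameter: maximum pairwise distance between mask pixels
    # (quadratic in the number of pixels, fine for small masks).
    pts = np.stack([xs, ys], axis=1).astype(float)
    diffs = pts[:, None, :] - pts[None, :, :]
    size = np.sqrt((diffs ** 2).sum(-1)).max()
    return px, py, d, size

# Velocities are then obtained from two consecutive frames,
# v_t = p_{t+1} - p_t, after linking spatially close objects across frames.
```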
4 Experiments In this section we demonstrate the ability of our model to learn intuitive physics in the presence of inter-object occlusions. We evaluate our model on two tasks: (i) future prediction, predicting objects' trajectories up to a horizon of 10 frames, and (ii) object following, coupling the dynamics network with the neural renderer to follow objects under occlusions, up to a horizon of 30 frames. In future prediction, we initialize the network with two frames, which enables the computation of object positions and velocities based on the instance masks as in the training phase. We then run a roll-out for N consecutive frames with the interaction network. We evaluate this rollout by comparing the predicted positions or reconstructed pixels with the ground truth. In object following, we alternate between short-term rollout (using the interaction network to predict the next frame) and object position refinement (using the renderer). This allows us to put an index on each object and follow them through long periods of occlusion. During full occlusion, the position is solely determined by the interaction network, since the object position refinement has a zero gradient. During partial occlusion, object position refinement is used to reconstruct a better estimate of the positions and velocities. To test object following, we measure the accuracy of the position estimates across long sequences containing occlusions. We also evaluate the ability to detect violations of object permanence (objects disappearing or appearing out of nowhere). Our evaluation is mostly based on a synthetic dataset, which we release for this paper. We also study generalization to real scenes, and compare to baseline models on the object permanence subset of the intuitive physics benchmark (Riochet et al., 2018). 4.1 Evaluating Future Prediction We use the pybullet physics simulator (https://pypi.org/project/pybullet) to generate videos of a variable number of balls of different colors and sizes bouncing in a 3D scene (a large box with solid walls) containing a variable number of smaller static 3D boxes. We generate five datasets, where we vary the camera tilt and the presence of occluders. In the first dataset ("Top view") we record videos with a top camera view (90°), where the borders of the frame coincide with the walls of the box. In the second dataset ("Top view+occ"), we add a large moving object occluding 25% of the scene. Finally, we decrease the camera viewing angle to 45°, 25° and 15°, which results in an increasing amount of inter-object occlusions due to the perspective projection of the 3D scene onto a 2D image plane. We computed the proportion of time each object is occluded or partially occluded and found 3.1% in the top-view videos, 31.1% in the top-view occluded videos, and 5.9%, 11.7%, 13.4% in the 45°, 25°, 15° tilted videos, respectively. Additional details of the datasets are given in the supplementary material. Inter-object occlusion investigation. In this section we consider prediction horizons of 5 and 10 frames, and evaluate the position error as the L2 distance between the predicted and target object positions. The L2 distance is computed in the 3D Cartesian scene coordinates, such that results are comparable across different camera tilts. Results are shown in Table 1. We first note that our model trained on mask and depth prediction significantly outperforms the linear baseline, which is computed as an extrapolation of the position of objects based on their initial velocities. Moreover, the results of our method are relatively stable across challenging setups with occlusions by external objects or frequent self-occlusions in tilted views. This demonstrates the potential ability of our method to be trained from real videos, where occlusions and other factors usually prevent reliable recovery of object states. Ablation Studies. As an ablation study we replace the Recurrent Interaction Network (RecIntNet) in our model with a multi-layer perceptron. This MLP contains four hidden layers of size 180 and is trained the same way as RecIntNet, modelling acceleration as in equation (2) of Section 3.3. To deal with the varying number of objects in the dataset, we pad the inputs with zeros. We observe that RecIntNet allows more robust predictions through time. As a second ablation study, we train the Recurrent Interaction Network without modelling acceleration (equation (2) in Section 3.3). This is similar to the model described in (Janner et al., 2018), where the object representation is not decomposed into position / velocity / intrinsic properties, but is rather an (unstructured) 256-dimensional vector. We observe a significant loss in performance, tending to confirm that modelling position and velocity explicitly, and having a constant-velocity prior on motion (given by equation (2)), improves future predictions. As a third ablation study, we train a deterministic variant of RecIntNet, where only the sequence of states is predicted, without the uncertainty term τ. The loss considered is the mean squared error between the predicted and the observed state. Observed results are slightly worse than for our model handling uncertainty (see NoProba-RIN), but close enough to say that this is not a key feature for modelling 5 or 10 frames into the future. In qualitative experiments, however, we observed more robust long-term predictions after introducing the uncertainty term τ in the model and the loss (equation (3)). For the purpose of comparison, we also evaluate three models trained using ground truth object states. Our Recurrent Interaction Network trained on ground truth object states gives similar results to the model of (Battaglia et al., 2016). As expected, training on ground truth states (effectively ignoring occlusions and other effects) performs better than training from object masks and depth.
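The position-error metric and the linear baseline used in the "Inter-object occlusion investigation" paragraph above (Table 1) can be sketched as follows. This is a schematic reconstruction of the evaluation described in the text, with shapes and names of our choosing, not the authors' evaluation code.

```python
import numpy as np

def mean_l2_error(pred_positions, true_positions):
    """Mean L2 distance between predicted and target object positions.

    Both arrays have shape (T, K, 3): T rollout frames, K objects, and 3D
    Cartesian scene coordinates (so errors are comparable across camera tilts).
    """
    return np.linalg.norm(pred_positions - true_positions, axis=-1).mean()

def linear_baseline(p0, v0, horizon):
    """Extrapolate each object with its initial velocity (no interactions)."""
    steps = np.arange(1, horizon + 1).reshape(-1, 1, 1)
    return p0[None] + steps * v0[None]

# Example: 3 objects, prediction horizon of 10 frames.
p0, v0 = np.random.rand(3, 3), 0.1 * np.random.randn(3, 3)
baseline = linear_baseline(p0, v0, horizon=10)   # shape (10, 3, 3)
```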
We also compare with the CNN autoencoder of Riochet et al. (2018), showing that our model gives better forward mask and depth predictions than CNN auto-encoders trained end-to-end. Full results are given in the supplementary material. Generalization to real scenes. We construct a dataset of 22 real videos, containing a variable number of colored balls and blocks in motion. Videos are recorded with a Microsoft Kinect 2 device, including RGB and depth frames. The setup is similar to the one generated with pybullet, recorded with a top camera view and containing 4 balls and a variable number of blocks (from 0 to 3). Here again, the borders of the frame coincide with the walls of the box. Taking as input the object segmentation of the first two frames, we use our model to predict object trajectories through the whole video (see Figure 4). We use the model trained on top-view pybullet videos, without fine-tuning the weights. We measure the error between predictions and ground truth positions along the rollout. Results are shown in Table 2 and clearly demonstrate that our approach outperforms the linear and MLP baselines. 4.2 Evaluating object following Evaluation on long roll-outs. At test time we ran longer roll-outs (up to 30 frames), iteratively corrected by our occlusion-aware refinement procedure. This can be viewed as a form of tracking evaluation. Table 3 shows the percentage of object predictions that diverge by more than an object diameter (20 pixels) using this method. The performance is very good, even for tilted views. Supplementary Figure 1 shows these numbers as a function of the pixel threshold. Roll-outs (without refinement) are provided in the following anonymous google drive (link), showing qualitatively convincing trajectories and bouncing behaviors, even for tilted views. Evaluation on the IntPhys benchmark. Riochet et al. (2018) propose a benchmark to evaluate intuitive physics models. Drawing inspiration from infant development studies, the benchmark consists of classifying whether a particular video is physically possible or impossible. We focus on the task O1 / Occluded / Dynamic_1, evaluating the notion of object permanence in the presence of occlusions. From the provided train set (www.intphys.com), containing only possible videos, we train our model to predict a sequence of 8 frames from an input pair of frames. The evaluation subset O1 / Occluded / Dynamic_1 contains 720 videos, forming 180 quadruplets of (2 possible / 2 impossible) videos. Starting from the first visible position of an object, we predict its trajectory until the end of the video, refining the prediction at every time step. For each video, the predicted masks are compared with the observed masks, resulting in a sequence of reconstruction errors. We derive an implausibility score for a video as the maximum error over the whole sequence. For each quadruplet of (2 possible / 2 impossible) videos, we classify the two videos with the highest implausibility scores as impossible and the other two as possible. Table 4 reports error rates, in comparison with the baselines from Riochet et al. (2018). We can see a clear improvement of our method, confirming that it can follow objects through long occlusions.
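The implausibility scoring and quadruplet classification used for the IntPhys evaluation above can be sketched as follows (a minimal illustration with names of our choosing):

```python
import numpy as np

def implausibility_scores(reconstruction_errors):
    """Score each video by its maximum per-frame reconstruction error."""
    return np.array([max(errors) for errors in reconstruction_errors])

def classify_quadruplet(scores):
    """In a quadruplet (2 possible / 2 impossible videos), label the two
    videos with the highest implausibility scores as impossible (1)."""
    labels = np.zeros(4, dtype=int)
    labels[np.argsort(scores)[-2:]] = 1
    return labels

# Toy quadruplet: two low-error (possible) and two high-error (impossible) videos.
errors = [[0.1, 0.2], [0.9, 1.1], [0.15, 0.1], [0.8, 1.3]]
print(classify_quadruplet(implausibility_scores(errors)))  # [0 1 0 1]
```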
5 Discussion Learning the physics of simple macroscopic object dynamics and interactions is a relatively easy task when ground truth coordinates are provided to the system, and techniques like Interaction Networks trained with a future frame prediction loss are quite successful (Battaglia et al., 2016). Of course, a major drawback of this kind of system is that it is basically restricted to learning physics from 3D simulators. In real life, the ground truth coordinates of each object are unknown; only projected 2D views are available. Interestingly, we found that projective geometry is not, in and of itself, a difficulty. Indeed, when an Interaction Network is fed, not with 3D Cartesian object coordinates, but with a 2.5D projective referential such as the xy position of objects in a retina (plus depth), the accuracy of the prediction remains unchanged compared with the Cartesian ground truth. As RGBD videos are relatively easy to collect in large quantities, it would not be difficult to train systems with such inputs. But real-world videos raise two other major difficulties: (i) images are not easily segmentable into objects, and (ii) objects do not always remain visible and tend to be occluded by other objects. This makes the ground truth coordinates of objects only partially observable. Here, we provided a first step towards more realistic physics learning by addressing the occlusion problem. We introduce a physics learning system composed of an Interaction Network followed by a trainable Renderer. The Interaction Network has been made recurrent, such that ground truth positions and velocities have to be fed only at the first frame. The renderer can be qualified as 2.5D, in that it takes as input the positions and velocities of objects (in retina pixel xy-d coordinates) and computes the resulting instance masks (plus depth). The 2.5D renderer is itself relatively lightweight (only 1233 parameters). It is based on a rather simple convolutional architecture, uses position fields, and can be trained with few examples to render objects of arbitrary shapes with controllable 2.5D positions respecting occlusions. The outcome can be seen in the renderings of tilted views shown in the videos provided in the anonymous google drive (link). What we showed is that instead of training the interaction network to predict ground truth positions, we can directly train it through estimates obtained from mask+depth, corrected through the renderer. The resulting system, of course, produces less accurate predictions than when trained with real positions, but is still better than either linear baselines or the CNN mask prediction networks of (Riochet et al., 2018; Lerer et al., 2016). Interestingly, the reconstruction loss is still effective even in the presence of external occluders, or when objects occlude each other because of a tilted view during training. This can be explained by the fact that when an object is occluded, the gradient of the reconstruction loss will be zero (because no matter where the object is predicted to be, so long as it is predicted to be behind another object, it is not visible, and hence contributes no loss). This amounts to simply reducing the size of the training set, so it only slightly degrades the final performance. Importantly, this cancellation of the losses occurs without explicitly telling the system which objects are occluded and which are not. This is implicitly learnt by the system through the rendering network. Applying this method to the intuitive physics benchmark presented in (Riochet et al., 2018), we show that it outperforms baselines on modelling object dynamics under occlusions. Further work needs to be done to fully train this system end-to-end, in particular by learning the renderer and the interaction network jointly.
Other avenues relate to the first problem raised above, i.e. the segmentation problem. Object segmentation based on raw pixels has been addressed in previous work, but yields errors (over- or under-segmentations) on more realistic datasets, which could have a dramatic effect on the interaction network, which crucially depends on reliable object identification. Such issues need to be addressed before end-to-end physics prediction systems can be trained and used with real videos, and approximate the ability of infants to predict the interactions of objects from live or video scenes. S1. Description of supplementary videos The videos are in the anonymous google drive: https://drive.google.com/drive/folders/111XK6GZnmHjd_7US6LGkxg6cAhJ2WDxQ?usp=sharing in the videos/ subdirectory. See also the README slideshow.
• scene_overview.mp4 shows raw videos of the entire environment.
• tracking_occlusions_*.mp4 show examples of position prediction through complete occlusions, using our occlusion-aware object position refinement. This shows that our model can keep track of the object identity through complete occlusions, mimicking "object permanence".
• one_class*.mp4 show different examples of our model following the motion of multiple objects in the scene. All balls have the same color, which makes them difficult to follow in case of mutual interactions. Videos come from the tilted 25° experiments, which are the most challenging because they include inter-object occlusions. Dots represent the predicted position of each object, the color being its identity. Our model shows very good predictions, with small colored markers (dots) well centered in the middle of each object and marker color remaining constant for each object, preserving object identity during occlusions and collisions. one_class_raw*.mp4 show rendered original views of the same dynamic scenes imaged from a different viewpoint for better understanding.
• rollout_0.mp4, rollout_1.mp4 show three different rollouts without position refinement. From left to right: ground truth trajectories, our model trained on states, our model trained on masks, and our model trained on masks with occlusions during training. Rollout length is 20 frames.
• rollout_tilt*_model.mp4 and rollout_tilt*_groundtruth.mp4 show the same dynamic scene observed with various camera tilts (e.g. tilt45_model.mp4 shows a video for a camera tilt of 45 degrees). *_model.mp4 are rollouts of our Recurrent Interaction Network (RecIntNet) computed without the occlusion-aware position refinement based on the observed masks (pure forward prediction of the dynamics model). *_groundtruth.mp4 are the corresponding ground-truth trajectories, rendered with the Compositional Rendering Network.
• intphys_*.mp4 show object following in the IntPhys training set.
• rollout_pybullet_*.mp4 show free rollouts (no refinement) on the synthetic dataset.
• rollout_real_*.mp4 show generalization to real scenes.
S2. Datasets To validate our model, we use the pybullet physics simulator (https://pypi.org/project/pybullet) to generate videos of a variable number of balls of different colors and sizes bouncing in a 3D scene (a large box with solid walls) containing a variable number of smaller static 3D boxes. We generate five dataset versions, where we vary the camera tilt and the presence of occluders. All experiments are made with datasets of 12,000 videos of 30 frames (with a frame rate of 20 frames per second).
For each dataset, we keep 2,000 videos separate to pre-train the renderer, 9,000 videos to train the physics predictor and 1,000 videos for evaluation. Our scene contains a variable number of balls (up to 6) with random initial positions and velocities, bouncing against each other and the walls. Initial positions are sampled from a uniform distribution in the box [1, 200]², with all balls lying on the ground. Initial velocities along the x and y axes are sampled from Unif([−25, 25]) units per frame, and the initial velocity along the z-axis is set to 0. The radius of each ball is sampled uniformly in [10, 40]. Scenes also contain a variable number of boxes (up to 2) fixed to the floor, against which balls can collide. Contrary to Battaglia et al. (2016), where the authors set a frame rate of 1000 frames per second, we sample 30 frames per second, which is more reasonable when working with masks (because of the computational cost of mask prediction). Top-view. In the first dataset we record videos with a top camera view, where the borders of the frame coincide with the walls of the box. Here, the initial motion is orthogonal to the camera, which makes this dataset very similar to the 2D bouncing-ball datasets presented in Battaglia et al. (2016) and Watters et al. (2017). However, our dataset is 3D and, because of collisions and the fact that the balls have different sizes, balls can jump on top of each other, making occlusions possible, even if not frequent. Top-view with Occlusions. To test the ability of our method to learn object dynamics in environments where occlusions occur frequently, we record a second dataset including frequent occlusions. We add an occluder to the scene, which is an object of irregular shape (an airplane), occluding 25% of the frame and moving in 3D between the balls and the camera. This occluder has a rectilinear motion and goes from the bottom to the top of the frame during the whole video sequence. Sample frames and rendered predictions can be found in the supplementary material. Tilted-views. In three additional datasets we keep the same objects and motions but tilt the camera by angles of 45°, 65° and 75°. Increasing the tilt of the camera results in more severe inter-object occlusions (both partial and complete), where the balls pass in front of each other and in front of and behind the static boxes, at different distances to the camera. In addition, the ball trajectories become more complex due to increasing perspective effects. In contrast to the top-view experiment, the motion is no longer orthogonal to the camera plane, and depth becomes crucial to predict the future motion.
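The initial conditions described above can be sampled as in the following sketch. The numeric ranges follow the text; the actual videos are produced by stepping the pybullet simulator, which is omitted here, and the function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scene(max_balls=6, max_boxes=2):
    """Sample initial conditions for one video, following the ranges above."""
    n_balls = rng.integers(1, max_balls + 1)
    scene = []
    for _ in range(n_balls):
        x, y = rng.uniform(1, 200, size=2)        # initial position, balls on the ground
        vx, vy = rng.uniform(-25, 25, size=2)     # initial velocity (units per frame)
        radius = rng.uniform(10, 40)
        scene.append(dict(pos=(x, y, 0.0), vel=(vx, vy, 0.0), radius=radius))
    n_boxes = rng.integers(0, max_boxes + 1)      # static boxes fixed to the floor
    return scene, n_boxes

balls, n_boxes = sample_scene()
# The sampled objects would then be instantiated in pybullet and simulated
# to produce the 30-frame videos described above.
```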
S3. Training details This section gives details of the offline pre-training of the Compositional Rendering Network and a detailed outline of the algorithm for training the Recurrent Interaction Network. Pre-Training the Compositional Rendering Network. We train the neural renderer to predict the mask and depth M̂_t, D̂_t from a list of objects [p_x, p_y, d, c], where p_x, p_y are the x-y coordinates of the object in the frame, d is the distance between the object and the camera, and c is a vector of intrinsic object properties containing the size of the object, its class (in our experiments a binary variable for whether the object is a ball, a square or an occluder) and its color as a vector in [0, 1]³. The target mask is a 128×128 image where each pixel value indicates the index of the corresponding object mask (0 for the background, i ∈ 1..N for objects). The loss on the mask is the negative log-likelihood, which corresponds to the average classification loss on each pixel: $L_{mask}(\hat{M}, M) = -\sum_{i \leq h,\, j \leq w} \sum_{n \leq N} \mathbb{1}(M_{i,j} = n)\, \log(\hat{M}_{i,j,n})$, (1) where the first sum is over individual pixels indexed by i and j, the second sum is over the individual objects indexed by n, M̂ ∈ [0, 1]^{h×w×N} are the predicted (soft) object masks, and M ∈ {0, ..., N}^{h×w} is the scene ground truth mask containing all objects. The target depth map is a 128×128 image with values normalized to the [−1, 1] interval during training. The loss on the depth map prediction is the mean squared error: $L_{depth}(\hat{D}, D) = \sum_{i \leq h,\, j \leq w} (\hat{D}_{i,j} - D_{i,j})^2$, (2) where D̂ and D ∈ R^{h×w} are the predicted and ground truth depth maps, respectively. The final loss used to train the renderer is the weighted sum of the losses on masks and depth maps, L = 0.7 · L_mask + 0.3 · L_depth. We use the Adam optimizer with default parameters, and reduce the learning rate by a factor of 10 each time the loss on the validation set does not decrease for 10 epochs. We pre-train the network on a separate set of 15,000 images generated with pybullet and containing similar objects as in our videos.
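The renderer pre-training loss described above can be sketched as follows. This is a minimal NumPy sketch assuming per-pixel softmax masks and a mean over pixels; the exact normalization and handling of background pixels are our assumptions, and the names are illustrative.

```python
import numpy as np

def renderer_loss(pred_masks, true_mask, pred_depth, true_depth,
                  w_mask=0.7, w_depth=0.3):
    """Pre-training loss of the Compositional Rendering Network:
    L = 0.7 * L_mask + 0.3 * L_depth (Eqs. (1) and (2) above).

    pred_masks: (H, W, N) soft object masks in [0, 1];
    true_mask:  (H, W) integer mask with values in {0, ..., N} (0 = background);
    pred_depth, true_depth: (H, W) depth maps normalized to [-1, 1].
    """
    h, w, n = pred_masks.shape
    eps = 1e-8
    # Negative log-likelihood of the correct object at each object pixel (Eq. 1).
    l_mask = 0.0
    for k in range(1, n + 1):
        sel = true_mask == k
        l_mask -= np.log(pred_masks[..., k - 1][sel] + eps).sum()
    l_mask /= h * w                                   # average over pixels (assumption)
    # Squared error on the depth map (Eq. 2), averaged over pixels.
    l_depth = np.mean((pred_depth - true_depth) ** 2)
    return w_mask * l_mask + w_depth * l_depth
```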
Training details of the Recurrent Interaction Network. The detailed outline of training the Recurrent Interaction Network is given in Algorithm 1.

Algorithm 1: Train Recurrent Interaction Network
Data: T, L: length of the video and prediction span, respectively;
      M_t, t = 1..T: instance masks; D_t, t = 1..T: depth maps;
      Rend: pre-trained Renderer;
      RIN: Recurrent Interaction Network (initialized with constant-velocity motion);
      Criterion: stopping criterion (RIN loss on validation);
      Detection(m_t, d_t): returns centroid, depth and size of instance masks;
      NLL, MSE: negative log-likelihood and mean squared error, respectively.
Result: Trajectory estimates p̄_{t+1..t+L}; trained Recurrent Interaction Network w_RIN.
while Criterion(RIN) do
    for t ∈ {1..T−1} do
        // Initialization of positions and velocities
        p̂_t ← Detection(m_t, d_t)                                   // initial object positions from observed masks and depths
        p̄_t ← argmin_{p ← p̂_t} NLL(Rend(p), (m_t, d_t))             // occlusion-aware object position refinement using the Renderer
        v̄_t ← p̄_{t+1} − p̄_t                                         // estimate object velocities from consecutive frames
    for t ∈ {1..T−L} do
        // Training the Recurrent Interaction Network
        p̂_{t+1..t+L} ← RIN(p̄_t, v̄_t)                                // predict the sequence of states (of all objects) using roll-out
        p̄_{t+1..t+L} ← argmin_{p ← p̂_{t+1..t+L}} NLL(Rend(p), (m_t, d_t))   // occlusion-aware object position refinement
        w_RIN ← argmin_w MSE(RIN(p̄_t), p̂_{t+1..t+L})                // update the weights of the Recurrent Interaction Network

Given an initial state s_t, the Recurrent Interaction Network recursively predicts a sequence of future states ŝ_{t+1}, ŝ_{t+2}, ..., ŝ_{t+L}, as well as error terms τ̂_{t+1}, τ̂_{t+2}, ..., τ̂_{t+L}. This predicted sequence is compared to object positions (ground truth or derived from masks after refinement), and the loss is computed as the sum of the negative log-likelihood terms (equation (3)) along the sequence. We use the Adam optimizer, and divide the learning rate by L to be consistent with the size of the sequence (as the loss is a sum over a sequence of length L). The same learning rate decay and stopping procedure is applied. Sequence lengths of 4, 6 and 10 were tested during training, with a length of 10 giving slightly more stable rollouts. S4. Occlusion-aware refinement of object positions Position refinement consists of using the pre-trained Renderer to correct the estimated positions of all objects in a particular frame. To do so, we give the position estimates as input to the Renderer, which outputs a corresponding pair of mask and depth field for the frame, (M̂, D̂), properly rendering the inter-object occlusions. This prediction is compared to the observed mask and depth field, returning errors that are backpropagated through the frozen weights of the Renderer. We perform gradient descent on the input itself to correct the object position and size estimates according to the observations. In our experiments, we set the learning rate to 0.01 and compute 200 iterations of gradient descent. Details of the loss are given in the supplementary material. For object positions estimated from object masks, this refinement allows us to reduce errors due to partial occlusions (moving the predicted center of one object from its visible mask centroid to its real center). S5. Future prediction: Comparison with Riochet et al. (2018) We evaluate the error of the mask and depth prediction, measured by the training error described in detail in Section S3. Here, we compare our model to a CNN autoencoder (Riochet et al., 2018), which directly predicts future masks from current ones, without explicitly modelling the dynamics of the individual objects in the scene. Note that this baseline is similar to Lerer et al. (2016). Results are shown in Table S1.

Table S1. Aggregate pixel reconstruction error for mask and depth, for a prediction span of two frames. This error is the loss used for training (described in Section S3): a weighted combination of the mask error (per-pixel classification error) and the depth error (mean squared error).

                                          Top view   Top view+occlusion   45° tilt   25° tilt   15° tilt
CNN autoencoder (Riochet et al., 2018)     0.0147          0.0451          0.0125     0.0124     0.0121
RIN, trained on mask+depth                 0.0101          0.0342          0.0072     0.0070     0.0069
Proba-RIN, trained on mask+depth           0.0100          0.0351          0.0069     0.0071     0.0065

As before, the existence of external occluders or the presence of tilt degrades the performance, but even in this case, our model remains much better than the CNN autoencoder of Riochet et al. (2018). S6. Detailed roll-out results In Figure S1, we report the proportion of correctly followed objects for different rollout lengths (5, 10 and 30 frames) as a function of the distance error (pixels). Note that the size of the smallest object is around 20 pixels.
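For completeness, the occlusion-aware refinement of Section S4 can be sketched as in the following minimal PyTorch snippet. The renderer, the layout of the state tensor and the nll loss are assumed interfaces (not the authors' code); only the learning rate and iteration count follow the text.

```python
import torch

def refine_positions(renderer, states, observed_mask, observed_depth,
                     nll, lr=0.01, n_steps=200):
    """Occlusion-aware refinement of object positions (Section S4).

    The renderer's weights stay frozen; gradient descent is performed on the
    input object states themselves so that the rendered mask and depth match
    the observation. `renderer` is assumed to be a differentiable module
    mapping states to (mask, depth); `nll` is the rendering loss.
    """
    states = states.clone().detach().requires_grad_(True)
    for p in renderer.parameters():
        p.requires_grad_(False)                 # freeze the renderer weights
    optimizer = torch.optim.SGD([states], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        pred_mask, pred_depth = renderer(states)
        # Compare rendered and observed mask/depth (e.g. the loss of Section S3).
        loss = nll(pred_mask, observed_mask) + nll(pred_depth, observed_depth)
        loss.backward()                          # gradients w.r.t. the input states
        optimizer.step()
    return states.detach()
```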
1. What is the focus of the paper regarding object processing?
2. What are the strengths and weaknesses of the proposed model?
3. How does the reviewer assess the experimental section and the use of privileged information?
4. What are the suggestions for improving the paper's organization and content?
5. Are there any missing ablation experiments or information regarding the refinement network?
6. How does the reviewer view the stochasticity of the interaction network?
7. Are there any potential references that the reviewer finds relevant but are missing in the paper?
Review
* Note: emergency review, done under a shorter time frame than good reviews require.
In this paper, the authors develop a highly structured model to predict motions of objects defined by segmentation masks and depths. The model trains a physics model (in the form of a slightly modified interaction network) and a renderer composed of a per-object renderer combined with an occlusion model which composes the per-object segmentation and depth into a scene segmentation and depth.
Positives:
- The jury is still out on the degree of structure required to do proper object processing (neural nets with large amounts of data, mildly structured nets like networks with attention, more structured nets like this, or a full-fledged renderer-like probabilistic program); this work contributes novel work to the line of research which attempts to do object-level processing with structured models while still leveraging the power of neural networks.
- The experimental section appears very thorough and convincing, even if the dataset is relatively simple.
Negatives:
- The model requires highly privileged information (segmentation mask, depth) at training and test time. Given that the segmentation/depth data are not too far from the actual images, it would have been interesting to see if it were possible to work with pixels (a variant of the occlusion model would probably still work), at least at test time.
- Regarding using segmentation/depth as input to the model: for a real dataset, segmentation is more relevant: it is both less informative than positions (due to significant occlusions) and easier to measure. In this highly synthetic dataset, this feels more debatable: objects are more entangled in the segmentation (which makes using segmentation more challenging), but only weakly, with many frames with no occlusion; furthermore, segmentation provides object shapes as information.
- The paper is generally well written, but could benefit from some reorganization: instead of defining each module separately, it would be better to describe the flow of information through the different modules, then describe each module. I was wondering for a while how the initial positions were estimated (required as input to both the interaction net and the renderer), but this only comes at the end of the paper.
- Some ablation experiments felt missing, for instance on the importance of the refinement network (it is also unfortunate that the details of refinement were not given in the main body).
- The stochasticity of the interaction network appears a bit weak (simple Gaussians); it would be interesting to display some data to see if the ground truth data is indeed Gaussian-like.
- Missing potential references:
  Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects
  Learning to Decompose and Disentangle Representations for Video Prediction
  MONet: Unsupervised Scene Decomposition and Representation
ICLR
Title Occlusion resistant learning of intuitive physics from videos Abstract To reach human performance on complex tasks, a key ability for artificial systems is to understand physical interactions between objects, and predict future outcomes of a situation. This ability, often referred to as intuitive physics, has recently received attention and several methods have been proposed to learn these physical rules from video sequences. Yet, most of these methods are restricted to the case where no occlusions occur, narrowing the potential areas of application. The main contribution of this paper is a method combining a predictor of object dynamics and a neural renderer efficiently predicting future trajectories and explicitly modelling partial and full occlusions among objects. We present a training procedure enabling learning of intuitive physics directly from input videos containing segmentation masks of objects and their depth. Our results show that our model learns object dynamics despite significant inter-object occlusions, and realistically predicts segmentation masks up to 30 frames into the future. We study model performance for increasing levels of occlusion, and compare results to previous work on the tasks of future prediction and object following. We also show results on predicting the motion of objects in real videos and demonstrate significant improvements over the state of the art on the object permanence task in the intuitive physics benchmark of Riochet et al. (2018). 1 Introduction Learning intuitive physics has recently raised significant interest in the machine learning literature. To reach human performance on complex visual tasks, artificial systems need to understand the world in terms of macroscopic objects, movements, interactions, etc. Infant development experiments show that young infants quickly acquire an intuitive grasp of how objects interact in the world, and that they use these intuitions for prediction and action planning (Carey, 2009; Baillargeon & Carey, 2012). This includes the notions of gravity (Carey, 2009), continuity of trajectories (Spelke et al., 1995), collisions (Saxe & Carey, 2006), etc. Object permanence, the fact that an object continues to exist when it is occluded (Kellman & Spelke, 1983), is one of the first concepts developed by infants. From a modeling point of view, the key scientific question is how to develop general-purpose methods that can make physical predictions in noisy environments, where many variables of the system are unknown. A model that could mimic even some of an infant's ability to predict the dynamics of objects and their interactions would be a significant advancement in model-based action planning for robotics (Agrawal et al., 2016; Finn & Levine, 2017). Importantly, to be applied to real-world problems, such a model needs to predict object motion in 3D and handle frequent inter-object occlusions. Yet, to our knowledge, most current works on learning intuitive physics get around this challenge by either i) working in 2D spaces with no occlusions (Battaglia et al., 2016; Chang et al., 2016; Fragkiadaki et al., 2015) or ii) learning end-to-end models without decomposing the scene into objects (Agrawal et al., 2016; Lerer et al., 2016; Finn et al., 2016). The former methods have demonstrated that learning models of intuitive physics is possible but assume ground truth positions of objects are available at both training and test time.
The latter methods can operate directly on pixel inputs without knowing ground truth positions of objects but are typically limited to a small number of objects and generalize poorly to new setups (e.g. a new number of objects in the scene, see (Lerer et al., 2016)). A third class of methods has recently emerged (Janner et al., 2018) that first decomposes the input image of the 3D scene into layers corresponding to masks of individual objects and learns scene dynamics given such an object-centric decomposition. Note that here the object dynamics is learnt from pixel masks of individual objects, rather than their ground truth positions. This is difficult for 3D scenes due to frequent inter-object occlusions that present two major challenges. First, estimating accurate positions and velocities of objects is challenging due to partial occlusions by other objects. Second, objects can be fully occluded by other objects for a significant number of frames. This work falls into the third class of compositional methods, but develops an occlusion resistant model for learning intuitive physics that addresses both of these challenges due to inter-object occlusions. In detail, we propose a compositional model that, from object instance masks and depth fields in two consecutive frames, (Mt,t+1, Dt,t+1), estimates the center, velocity and size of objects. This predicted state ŝt is then used as input to a Recurrent Interaction Network, which predicts a sequence of future states ŝt+2, ..., ŝt+L. This sequence of states is given to the Compositional Rendering Network, which produces segmentation masks M̂t+2, ..., M̂t+L and depth estimates D̂t+2, ..., D̂t+L in future frames. The key innovation of the proposed model is dealing with partial and complete occlusions in the scene. To deal with partial occlusions, the obtained sequence of masks+depths is compared to the ground truth, and gradients are backpropagated through the pre-trained Compositional Rendering Network to refine state predictions. This allows us to refine the positions of partially occluded objects, where simply taking the centroid of the observed portion of the mask results in an incorrect estimate of the object position. With this refinement, object positions are corrected taking into account the unobserved (occluded) portion of the object. The refined state estimates s̄t+1, ..., s̄t+L are used at training time for learning the parameters of the Recurrent Interaction Network and at test time to improve the accuracy of object position prediction when following partially occluded objects. To deal with full occlusions, when the object is not visible in multiple frames, we use the learnt model of object dynamics (Recurrent Interaction Network) to predict the position of the object multiple frames ahead, thus recovering the object position after the occlusion. Using the proposed approach, we show that it is possible to learn object dynamics in 3D environments with severe inter-object occlusions and predict segmentation masks up to 30 frames into the future despite occlusions by other objects, thus mimicking object permanence. 2 Related work Forward modelling in videos. Forward modelling in video has been studied for action planning (Ebert et al., 2018; Finn et al., 2016) and as a scheme for unsupervised learning of visual features (Lan et al., 2014; Mathieu et al., 2015). In that setup, a model is given a sequence of frames and has to generate frames at future time steps.
To succeed in this task, such models need to predict object movements, suggesting that they need to learn physical regularities from video. However, models for end-to-end future frame prediction tend to perform poorly on long-term prediction tasks (say more than 5-8 frames (Lan et al., 2014; Mathieu et al., 2015; Finn et al., 2016)), failing to preserve object properties and generating blurry outputs. This suggests that models for intuitive physics may require a more structured representation of objects and their interactions. Learning dynamics of objects. Longer-term predictions can be more successful when done on the level of trajectories of individual objects. For example, in (Wu et al., 2017b), the authors propose "scene de-rendering", a system that builds an object-based, structured representation from a static (synthetic) image. The recovered state can be further used for physical reasoning and future prediction using a physics engine on both synthetic and real data (Battaglia et al., 2013; Wu et al., 2017a). Future prediction from a static image is often multi-modal (e.g. a car can move forward or backward) and hence models able to predict multiple possible futures, e.g. based on variational auto-encoders (Xue et al., 2016), are needed. Others have developed structured models that factor object motion and object rendering into two learnable modules. Examples include (Watters et al., 2017; Marco Fraccaro, 2017; Ehrhardt et al., 2017b;a), which combine object-centric dynamic models and visual encoders. Such models parse each frame into a set of object state representations, which are used as input to a "dynamic" model predicting object motion. However, (Marco Fraccaro, 2017) drastically restrict the complexity of the visual input by working on binary 32x32 frames, and (Ehrhardt et al., 2017b;a; Watters et al., 2017) still need ground truth positions of objects to train their models. None of these works explicitly models inter-object occlusions, which is the focus of our method. In our work, we build on learnable models of object dynamics (Battaglia et al., 2016) and (Chang et al., 2016), which have the key property that they are compositional and hence can model a variable number of objects, but extend them to learn from visual input rather than ground truth object state vectors. Our work is related to (Janner et al., 2018), done independently and concurrently with our work, who develop an object-oriented model of dynamics coupled with a differentiable object renderer to predict a single image with segmentation masks of objects at a future time, given a single still image as input. In contrast, our model predicts frame-by-frame object motion in scenes with partial and full object occlusion. This is possible because (i) our model of dynamics is recursive, predicting a whole sequence of object movements (instead of a single image in the future (Janner et al., 2018)), which allows the model to be applied recursively to follow an object through complete occlusion by other objects; (ii) we design a refinement procedure that allows us to refine the estimated positions of objects in case of partial occlusions. In addition, in contrast to (Janner et al., 2018), our model predicts the velocity of objects and the depth of the scene (also taking as input a pair of frames and the depth field). Others have proposed unsupervised methods to discover objects and their interactions in 2D videos (van Steenkiste et al., 2018).
It is also possible to construct Hierarchical Relation Networks (Mrowca et al., 2018), representing objects as graphs and predicting interactions between pairs of objects. However, this task is still challenging and requires full supervision in the form of ground truth positions and velocities of objects. Learning physical properties from visual inputs. Also related are methods for learning physical properties of objects. Learning of physical properties, such as mass, volume or coefficients of friction and restitution, has been considered in (Wu et al., 2016). Others have looked at predicting the stability and/or the dynamics of towers of blocks (Lerer et al., 2016; Zhang et al., 2016; Li et al., 2016a;b; Mirza et al., 2017; Groth et al., 2018). Our work is complementary. We don't consider prediction of physical properties but focus on learning models of object dynamics handling inter-object occlusions at both training and test time. (Greff et al., 2019) Contributions. We describe a model that learns complex dynamics of objects in a 3D environment, where inter-object occlusions occur frequently. Our model combines an abstract representation of the scene (position, velocity and depth of objects) with a compositional neural renderer predicting the resulting object masks with depth and explicitly modelling occlusions between objects. This procedure allows us to train the model even when some objects are partially or totally occluded. Unlike (Watters et al., 2017), our model is fully compositional and handles a variable number of objects in the scene. Moreover, it does not require as input annotated inter-frame correspondences during training. 3 Occlusion resistant modeling for intuitive physics This section describes our model for occlusion resistant learning of intuitive physics. We first describe the learning set-up considered in this work. We then describe in detail the two main components of our model. In section 3.2 we outline the compositional renderer with occlusion reasoning that predicts object masks given a scene state representation, and in section 3.3 we detail the recurrent interaction network that predicts the scene state evolution over time. Finally, in section 3.4 we outline the training procedure. 3.1 Set-up overview As illustrated in Figure 1 (and the Algorithm in the Supplementary Material), during learning our method observes a sequence of object instance masks and depth fields Mt,..,t+L, Dt,..,t+L. The mask for each frame is composed of a set of channels where each channel represents the pixels corresponding to an individual object, along with their color and shape (boxes or balls of different sizes). The model does not require knowledge of correspondences between objects over time, which might be difficult to obtain in practice. Our model is composed of two networks described below: a pre-trained occlusion-sensitive Compositional Rendering Network (Renderer), which renders masks and depth fields given a set of object positions (also called states), and a trainable Recurrent Interaction Network (RecIntNet), which predicts positions of objects in future frames. 3.2 Occlusion modeling: the Compositional Rendering Network [Figure: architecture of the Compositional Rendering Network. For each pixel [x, y], an Object Renderer (an MLP with 3 hidden layers over the input objects [px, py, d, c] and the pixel coordinates, followed by 3×3 convolutions with ×2 bilinear interpolation) predicts an object mask and object depth per object; an Occlusion predictor then combines these into the scene mask and scene depth, trained with the losses Lmask and Ldepth.] The object rendering network takes as input the coordinates (xk, yk, dk) of object k in a frame together with additional dimensions for intrinsic object properties (shape, color and size) (c).
The network predicts object’s binary mask, Mk as well as the depth map Dk. The input vector (xk, yk, dk, ck) ∈ Rl is first copied into a (l+2)×16×16 tensor, where each 16×16 cell position contains an identical copy of the input vector together with x and y coordinates of the cell. Adding the x and y coordinates may seem redundant, but this kind of position field enables a very local computation of the shape of the object and avoids a large number of network parameters (similar architectures were recently also studied in (?)). The input tensor is processed with 1 × 1 convolution filters. The resulting 16-channel feature map is further processed by three blocks of convolutions. Each block contains three convolutions with filters of size 1× 1, 3× 3 and 1× 1 respectively, and 4, 4 and 16 feature maps, respectively. We use ReLU pre-activation before each convolution, and up-sample (scale of 2 and bilinear interpolation) feature maps between blocks. The last convolution outputs N + 1 feature maps of size 128 × 128, the first feature map encoding depth and the N last feature maps encoding mask predictions for the individual objects. The object rendering network is applied to all objects present, resulting in a set of masks and depth maps denoted as {(M̂k, D̂k), k = 1..N}. The Occlusion predictor takes as input the masks and depth maps for N objects and aggregates them to construct the final occlusion-consistent mask and depth map. To do so it computes, for each pixel i, j ≤ 128 and object k the following weight: cki,j = eλD̂ k i,j∑N q=1 e λD̂qi,j , k = 1..N, (1) where λ is a parameter learned by the model. The final masks and depth maps are computed as a weighted combination of masks M̂ki,j and depth maps D̂ki,j for individual objects k: M̂i,j = ∑N k=1 c k i,jM̂ k i,j , D̂i,j = ∑N k=1 c k i,jD̂ k i,j , where i, j are output pixel coordinates ∀i, j ≤ 128 and cki,j the weights given by (1). The intuition is that the occlusion renderer constructs the final output (M̂, D̂) by selecting, for every pixel, the mask with minimal depth (corresponding to the object occluding all other objects). For negative values of λ equation (1) is as a softmin, that selects for every pixel the object with minimal predicted depth. Because λ is a trainable parameter, gradient descent forces it to take large negative values, ensuring good occlusion predictions. Also note that this model does not require to be supervised by the depth field to predict occlusions correctly. In this case, the object rendering network still predicts a feature map D̂ that is not equal to the depth anymore but is rather an abstract quantity that preserves the relative order of objects in the view. This allows Renderer to predict occlusions when the target masks are RGB only. However, it still needs depth information about in the input (either true depth or relative ordering). 3.3 Dynamics prediction: the Recurrent Interaction Network (RecIntNet) To model object dynamics, we build on the Interaction Network (Battaglia et al., 2016), which predicts dynamics of a variable number of objects by modelling their pairwise interactions. Here we describe three extensions of the vanilla Interaction Network model. First, we extend the Interaction Network to model 2.5D scenes where position and velocity have a depth component. Second, we extend the Interaction Network to train from the whole sequence of future states and call this new model Recurrent Interaction Network. 
Third, we introduce variance in the position predictions, to stabilise the learning phase, and avoid penalizing too much very encertain predictions. The three extensions are described below. Modelling compositional object dynamics in 2.5D scenes. As shown in (Battaglia et al., 2016), Interaction Networks can be used to predict object motion both in 3D or in 2D space. Given a list of objects represented by their positions, velocities and size in the Cartesian plane, an Interaction Network models interactions between all pairs of objects, aggregates them over the image and predicts the resulting motion for each object. Here, we model object interactions in 2.5D space, since we have no access to the object position and velocity in the Cartesian space. Instead we have locations and velocities in the image plane plus depth (the distance between the objects and the camera). Training from a sequence of future frames. The vanilla Interaction Network (Battaglia et al., 2016) is trained to predict position and velocity of each object in one step into the future. Here, we learn from multiple future frames. In detail, we "rollout" the Interaction Network to predict a whole sequence of future states as if a standard Interaction Network was applied in recurrent manner. We found that faster training can be achieved by directly predicting changes in the velocity, hence: [p1, v1, c] = [p0 + δtv0 + δt2 2 dv, v0 + dv, c], (2) where p1 and v1 are position and velocity of the object at time t1, p0 and v0 are position and velocity at time t0, and δt = t1 − t0 is the time step. Position and velocity in pixel space (p = [px, py, d] where px, py are the position of the object in the frame), d is depth and v is the velocity in that space. Hence dv can be seen as the acceleration, and (v0 + dv),(p0 + δtv0 + δt2 2 dv) as the first and second order Taylor approximations of velocity and position, respectively. Assuming an initial weight distribution close to zero, this gives the model a prior that the object motion is linear. Prediction uncertainty. To account for prediction uncertainty and stabilize learning, we assume that object position follows a multivariate normal distribution, with diagonal covariance matrix. Each term σ2x, σ2y, σ2d of the covariance matrix represents the uncertainty in prediction, along x-axis, y-axis and depth. Such uncertainty is also given as input to the model, to account for uncertainty either in object detection (first prediction step) or in the recurrent object state prediction. The resulting loss is negative log-likelihood of the target p1 w.r.t. the multivariate normal distribution, which reduces to L ( (p̂1, τ̂1), p1 ) = (p̂1 − p1)2 exp τ̂1 + τ̂1, (3) where τ̂1 = log σ̂21 is the estimated level of noise propagated through the Recurrent Interaction Network, where σ1 concatenates σ2x, σ2y, σ2d, p1 is the ground truth state and p̂1 is the predicted state at time t+ 1. The intuition is that the squared error term in the numerator is weighted by the estimated level of noise τ̂1, which acts also as an additional regularizer. 3.4 System Training In this section we give a high level description of the procedure for training our model (details are in the supplementary Section S3). The Compositional rendering Network is pre-trained offline from masks and depths in individual frames. Training of the Recurrent Interaction Network is done in the following three steps. First (initialization phase), we select from each training video short clips containing L frames (here, we use L=10). 
In each frame, we estimate position, depth and size of each object by computing the centroid of each mask, its average depth and diameter (max distance between two mask pixels). To correct errors due to partial occlusions, we perform occlusion-aware refinement of object positions using gradient descent through the pre-trained Renderer (see supplementary material). The result is a partial state vector (no velocities) for each frame, corrected for partial occlusions. Spatially close objects in two consecutive frames are linked and considered the same objects. In a second step (prediction phase), we use the Recurrent Interaction Network to roll out L− 2 predictions for the position of these objects in future frames starting from Frame 2. Frame 1 is used to compute the initial velocities of Frame 2. In a third step, (update phase), we use the distance between the ground truth positions established in step 1 and the rollout positions to perform the training of the Recurrent Interaction Network. 4 Experiments In this section we demonstrate the ability of our model to learn intuitive physics in presence of inter-object occlusions. We evaluate our model on two task: (i) future prediction, predicting objects’ trajectories up to a horizon of 10 frames and (ii) object following, coupling the dynamics network with the neural renderer to follow objects under occlusions, up to a horizon of 30 frames. In future prediction, we initialize the network with two frames, which enable the computation of object positions and velocities based on the instance masks as in the training phase. We then run a roll-out for N consecutive frames with the interaction network. We evaluate this rollout by comparing the predicted positions or reconstructed pixels with the ground truth. In object following, we alternate between short-term rollout (using the interaction network to predict the next frame) and object position refinement (using the renderer). This allows us to put an index on each object, and follow them through large periods of occlusions. During full occlusion, the position is solely determined by the interaction network, since the object position refinement has a zero gradient. During full or partial occlusion, object position refinement is used to reconstruct a better estimate of the positions and velocities. To test object following, we measure the accuracy of the position estimates across long sequences containing occlusions. We also evaluate the ability to detect the violation of object permanence (objects disappearing or appearing out of nowhere). Our evaluation is mostly based on a synthetic dataset, which we release for this paper. We also study generalization to real scenes, and compare to baseline models on the object permanance subset of the intuitive physics benchmark (Riochet et al., 2018). 4.1 Evaluating Future Prediction We use pybullet1 physics simulator to generate videos of variable number of balls of different colors and sizes bouncing in a 3D scene (a large box with solid walls) containing a variable number of smaller static 3D boxes. We generate five datasets, where we vary the camera tilt and the presence of occluders. In the first dataset (“Top view") we record videos with a top camera view (or 90◦), where the borders of the frame coincide with the walls of the box. In the second dataset (“Top view+occ"), we add a large moving object occluding 25% of the scene. 
Finally, we decrease the camera viewing angle to 45◦, 25◦ and 15◦ degrees, which results in an increasing amount of inter-object object occlusions due to perspective projection of the 3D scene onto a 2D image plane. We computed the proportion of time each object is occluded or partially occluded and found 3.1% in the top-view videos, 31.1% in the top view occluded videos, and 5.9%, 11.7%, 13.4% in the 45◦, 25◦, 15◦ tilted videos, respectively. Additional details of the datasets are given in the supplementary material. Inter-object occlusion investigation. In this section we consider prediction horizons of 5 and 10 frames, and evaluate the position error as a L2 distance between the predicted and target object positions. L2 distance is computed in the 3D Cartesian scene coordinates, such that results are comparable across different camera tilts. Results are shown in Table 1. We first note that our model trained on mask and depth prediction significantly outperforms the linear baseline, which is computed as an extrapolation of the position of objects based on their initial velocities. Moreover, the results of our method are relatively stable across challenging setups with occlusions by external objects or frequent self-occlusions in tilted views. This demonstrates the potential ability of our method to be trained from real videos where occlusions and other factors usually prevent reliable recovery of object states. Ablation Studies. As an ablation study we replace the Recurrent Interaction Network (RecIntNet) in our model with a multi-layer perceptron. This MLP contains four hidden layers of size 180 and is trained the same way as RecIntNet, modelling acceleration as described in equation 3.3. To deal with the varying number of objects in the dataset, we pad the inputs with zeros. We observe that RecIntNet allows more robust predictions through time. As a second ablation study, we train the Recurrent Interaction Network without modelling acceleration (3.3). This is similar to the model described in (Janner et al., 2018), where object representation is not decomposed into position / velocity / intrinsic properties, but is 1https://pypi.org/project/pybullet rather a (unstructured) 256-dimensional vector. We observe a significant loss in performance, tending to confirm that modelling position and velocity explicity, and having a constant velocity prior on motion (given by 3.3) improves future predictions. As a third ablation study, we train a deterministic variant of RecIntNet, where only the sequence of states is predicted, without the uncertainty term τ . The loss considered is the mean squared error between the predicted and the observation state. Observed results are slightly worse than our model handling uncertainty (see NoProba-RIN), but close enough to say that this is not a key feature for modelling 5 or 10 frames in the future. In qualitative experiments, however, we observed more robust long-term predictions after introducing the uncertainty term τ in the model and the loss (equation 3). For the purpose of comparison, we also evaluate three models trained using ground truth object states. Our Recurrent Interaction Network trained on ground truth object states gives similar results to the model of (Battaglia et al., 2016). As expected, training on ground truth states (effectively ignoring occlusions and other effects) performs better than training from object masks and depth. 
We also compare with the CNN autoencoder of Riochet et al. (2018), showing that our model gives better forward mask and depth predictions than CNN auto-encoders trained end-to-end. Full results are given in the supplementary material.
Generalization to real scenes. We construct a dataset of 22 real videos, containing a variable number of colored balls and blocks in motion. Videos are recorded with a Microsoft Kinect 2 device, including RGB and depth frames. The setup is similar to the one generated with pybullet, recorded with a top camera view and containing 4 balls and a variable number of blocks (from 0 to 3). Here again, the borders of the frame coincide with the walls of the box. Taking as input the object segmentation of the first two frames, we use our model to predict object trajectories through the whole video (see Figure 4). We use the model trained on top-view pybullet videos, without fine-tuning the weights. We measure the error between predictions and ground truth positions along the rollout. Results are shown in Table 2 and clearly demonstrate that our approach outperforms the linear and MLP baselines.
4.2 Evaluating object following
Evaluation on long roll-outs. At test time, we ran longer roll-outs (up to 30 frames), iteratively corrected by our occlusion-aware refinement procedure. This can be viewed as a form of tracking evaluation. Table 3 shows the percentage of object predictions that diverge by more than an object diameter (20 pixels) using this method. The performance is very good, even for tilted views. Supplementary Figure 1 shows these numbers as a function of the pixel threshold. Roll-outs (without refinement) are provided in the following anonymous Google Drive (link), showing qualitatively convincing trajectories and bouncing behaviors, even for tilted views.
Evaluation on the IntPhys benchmark. Riochet et al. (2018) propose a benchmark to evaluate intuitive physics models. Drawing inspiration from infant development studies, the benchmark consists of classifying whether a particular video is physically possible or impossible. We focus on the task O1 / Occluded / Dynamic_1, evaluating the notion of object permanence in the presence of occlusions. From the provided train set (www.intphys.com), containing only possible videos, we train our model to predict a sequence of 8 frames from an input pair of frames. The evaluation subset O1 / Occluded / Dynamic_1 contains 720 videos, forming 180 quadruplets of (2 possible / 2 impossible) videos. Starting from the first visible position of an object, we predict its trajectory until the end of the video, refining the prediction at every time step. For each video, the predicted masks are compared with the observed masks, resulting in a sequence of reconstruction errors. We derive an implausibility score for a video as the maximum error over the whole sequence. For each quadruplet of (2 possible / 2 impossible) videos, we classify the two videos that have the highest implausibility scores as impossible and the two others as possible. Table 4 reports error rates, in comparison with baselines from Riochet et al. (2018). We can see a clear improvement of our method, confirming it can follow objects through long occlusions.
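The implausibility scoring and quadruplet classification described above are simple enough to state in a few lines; the sketch below mirrors that description with our own illustrative names, and is not the authors' code.

```python
import numpy as np

def implausibility_score(reconstruction_errors):
    """One score per video: the worst frame-level mask reconstruction error along the rollout."""
    return float(np.max(reconstruction_errors))

def classify_quadruplet(scores):
    """Given scores for 4 videos (2 possible, 2 impossible), call the two
    highest-scoring videos impossible and the other two possible."""
    labels = ["possible"] * 4
    for idx in np.argsort(scores)[-2:]:
        labels[idx] = "impossible"
    return labels
```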
5 Discussion
Learning the physics of simple macroscopic object dynamics and interactions is a relatively easy task when ground truth coordinates are provided to the system, and techniques like Interaction Networks trained with a future frame prediction loss are quite successful (Battaglia et al., 2016). Of course, a major drawback of this kind of system is that it is basically restricted to learning physics from 3D simulators. In real life, the ground truth coordinates of each object are unknown; only projected 2D views are available. Interestingly, we found that projective geometry is not, in and of itself, a difficulty. Indeed, when an Interaction Network is fed not with 3D Cartesian object coordinates, but with a 2.5D projective referential such as the xy position of objects in a retina (plus depth), the accuracy of the prediction remains unchanged compared with the Cartesian ground truth. As RGBD videos are relatively easy to collect in large quantities, it would not be difficult to train systems with such inputs. But real-world videos raise two other major difficulties: (i) images are not easily segmentable into objects, and (ii) objects do not always remain visible and tend to be occluded by other objects. This makes the ground truth coordinates of objects only partially observable. Here, we provided a first step towards more realistic physics learning by addressing the occlusion problem. We introduce a physics learning system composed of an Interaction Network followed by a trainable Renderer. The Interaction Network has been made recurrent, such that ground truth positions and velocities only have to be fed at the first frame. The renderer can be qualified as 2.5D, in that it takes as input the positions and velocities of objects (in retina pixel xy-d coordinates) and computes the resulting instance masks (plus depth). The 2.5D renderer is itself relatively lightweight (only 1233 parameters). It is based on a rather simple convolutional architecture, uses position fields, and can be trained with few examples to render objects of arbitrary shapes with controllable 2.5D positions respecting occlusions. The outcome can be seen in the rendering of tilted views shown in the videos provided in the anonymous Google Drive (link). What we showed is that instead of training the interaction network to predict ground truth positions, we can directly train it through estimates obtained from mask+depth, corrected through the renderer. The resulting system, of course, produces less accurate predictions than when trained with real positions, but it is still better than either linear baselines or the CNN mask prediction networks of Riochet et al. (2018) and Lerer et al. (2016). Interestingly, the reconstruction loss is still effective even in the presence of external occluders, or when objects occlude each other because of a tilted view during training. This can be explained by the fact that when an object is occluded, the gradient of the reconstruction loss will be zero (because no matter where the object is predicted to be, so long as it is predicted to be behind another object, it is not visible and hence contributes no loss). This amounts to simply reducing the size of the training set, so it only slightly degrades the final performance. Importantly, this cancellation of the losses occurs without explicitly telling the system which objects are occluded and which are not. This is implicitly learnt by the system through the rendering network. Applying this method to the intuitive physics benchmark presented in Riochet et al. (2018), we show it outperforms baselines on modelling object dynamics under occlusions. Further work needs to be done to fully train this system end-to-end, in particular by learning the renderer and the interaction network jointly.
Other avenues relate to the first problem raised above, i.e. the segmentation problem. Object segmentation based on raw pixels has been addressed in previous work, but yields errors (over- or under-segmentations) on more realistic datasets, which could have a dramatic effect on the interaction network, which crucially depends on reliable object identification. Such issues need to be addressed before end-to-end physics prediction systems can be trained and used with real videos, and approximate the ability of infants to predict the interactions of objects from live or video scenes.
S1. Description of supplementary videos
The videos are in the anonymous Google Drive https://drive.google.com/drive/folders/111XK6GZnmHjd_7US6LGkxg6cAhJ2WDxQ?usp=sharing in the videos/ subdirectory. See also the README slideshow.
• scene_overview.mp4 shows raw videos of the entire environment.
• tracking_occlusions_*.mp4 show examples of position prediction through complete occlusions, using our occlusion-aware object position refinement. This shows that our model can keep track of the object identity through complete occlusions, mimicking "object permanence".
• one_class*.mp4 show different examples of our model following the motion of multiple objects in the scene. All balls have the same color, which makes them difficult to follow in the case of mutual interactions. Videos come from the tilted 25◦ experiments, which are the most challenging because they include inter-object occlusions. Dots represent the predicted position of each object, the color being its identity. Our model shows very good predictions, with small colored markers (dots) well centered in the middle of each object and the marker color remaining constant for each object, preserving the object identity during occlusions and collisions. one_class_raw*.mp4 show rendered original views of the same dynamic scenes but imaged from a different viewpoint for better understanding.
• rollout_0.mp4, rollout_1.mp4 show three different rollouts without position refinement. From left to right: ground truth trajectories, our model trained on states, our model trained on masks, our model trained on masks with occlusions during training. Rollout length is 20 frames.
• rollout_tilt*_model.mp4 and rollout_tilt*_groundtruth.mp4 show the same dynamic scene but observed with various camera tilts (e.g. tilt45_model.mp4 shows a video for a camera tilt of 45 degrees). *_model.mp4 are rollouts of our Recurrent Interaction Network (RecIntNet) computed without the occlusion-aware position refinement based on the observed masks (pure forward prediction of the dynamics model). *_groundtruth.mp4 are the corresponding ground-truth trajectories, rendered with the Compositional Rendering Network.
• intphys_*.mp4 show object following in the IntPhys training set.
• rollout_pybullet_*.mp4 show free rollouts (no refinement) on the synthetic dataset.
• rollout_real_*.mp4 show generalization to real scenes.
S2. Datasets
To validate our model, we use the pybullet physics simulator (https://pypi.org/project/pybullet) to generate videos of a variable number of balls of different colors and sizes bouncing in a 3D scene (a large box with solid walls) containing a variable number of smaller static 3D boxes. We generate five dataset versions, where we vary the camera tilt and the presence of occluders. All experiments are made with datasets of 12,000 videos of 30 frames (with a frame rate of 20 frames per second).
For each dataset, we keep 2,000 videos separate to pre-train the renderer, 9,000 videos to train the physics predictor and 1,000 videos for evaluation. Our scene contains a variable number of balls (up to 6) with random initial positions and velocities, bouncing against each other and the walls. Initial positions are sampled from a uniform distribution in the box [1, 200]², with all balls lying on the ground. Initial velocities along the x and y axes are sampled from Unif([−25, 25]) units per frame, and the initial velocity along the z-axis is set to 0. The radius of each ball is sampled uniformly in [10, 40]. Scenes also contain a variable number of boxes (up to 2) fixed to the floor, against which balls can collide. Contrary to Battaglia et al. (2016), where the authors set a frame rate of 1000 frames per second, we sample 30 frames per second, which is more reasonable when working with masks (because of the computational cost of mask prediction).
Top-view. In the first dataset we record videos with a top camera view, where the borders of the frame coincide with the walls of the box. Here, the initial motion is orthogonal to the camera, which makes this dataset very similar to the 2D bouncing-balls datasets presented in Battaglia et al. (2016) and Watters et al. (2017). However, our dataset is 3D, and because of collisions and the fact that the balls have different sizes, balls can jump on top of each other, making occlusions possible, even if not frequent.
Top-view with Occlusions. To test the ability of our method to learn object dynamics in environments where occlusions occur frequently, we record the second dataset including frequent occlusions. We add an occluder to the scene, which is an object of irregular shape (an airplane), occluding 25% of the frame and moving in 3D between the balls and the camera. This occluder has a rectilinear motion and goes from the bottom to the top of the frame during the whole video sequence. Sample frames and rendered predictions can be found in the supplementary material.
Tilted-views. In three additional datasets we keep the same objects and motions but tilt the camera by 45◦, 65◦ and 75◦ (corresponding to viewing angles of 45◦, 25◦ and 15◦). Increasing the tilt of the camera results in more severe inter-object occlusions (both partial and complete), where the balls pass in front of each other, and in front of and behind the static boxes, at different distances to the camera. In addition, the ball trajectories become more complex due to increasing perspective effects. Contrary to the top-view experiment, the motion is no longer orthogonal to the camera plane, and depth becomes crucial to predict the future motion.
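As a rough illustration of the dataset generation described above, the snippet below samples one scene configuration using the stated ranges. It is a sketch under our own naming assumptions; the actual pybullet setup (body creation, collision handling, stepping and rendering) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_initial_conditions():
    """Sample one scene configuration following the ranges given above."""
    n_balls = rng.integers(1, 7)      # up to 6 balls
    n_boxes = rng.integers(0, 3)      # up to 2 static boxes
    balls = []
    for _ in range(n_balls):
        balls.append({
            "radius": rng.uniform(10, 40),
            "position": np.array([*rng.uniform(1, 200, size=2), 0.0]),   # on the ground
            "velocity": np.array([*rng.uniform(-25, 25, size=2), 0.0]),  # units/frame, vz = 0
        })
    return balls, n_boxes
```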
S3. Training details
This section gives details of the offline pre-training of the Compositional Rendering Network and a detailed outline of the algorithm for training the Recurrent Interaction Network.
Pre-Training the Compositional Rendering Network. We train the neural renderer to predict mask and depth M̂t, D̂t from a list of objects [px, py, d, c], where px, py are the x-y coordinates of the object in the frame, d is the distance between the object and the camera, and c is a vector of intrinsic object properties containing the size of the object, its class (in our experiments a binary variable for whether the object is a ball, a square or an occluder) and its color as a vector in [0, 1]³. The target mask is a 128 × 128 image where each pixel value indicates the index of the corresponding object mask (0 for the background, i ∈ 1..N for objects). The loss on the mask is the negative log-likelihood, which corresponds to the average classification loss on each pixel:
Lmask(M̂, M) = −∑_{i≤h, j≤w} ∑_{n≤N} 1(M_{i,j} = n) log(M̂_{i,j,n}),   (1)
where the first sum is over individual pixels indexed by i and j, the second sum is over the individual objects indexed by n, M̂ ∈ [0, 1]^{h×w×N} are the predicted (soft) object masks, and M ∈ {0, 1, ..., N}^{h×w} is the scene ground truth mask containing all objects. The target depth map is a 128 × 128 image with values normalized to the [−1, 1] interval during training. The loss on the depth map prediction is the mean squared error:
Ldepth(D̂, D) = ∑_{i≤h, j≤w} (D̂_{i,j} − D_{i,j})²,   (2)
where D̂, D ∈ R^{h×w} are the predicted and ground truth depth maps, respectively. The final loss used to train the renderer is the weighted sum of the losses on masks and depth maps, L = 0.7 · Lmask + 0.3 · Ldepth. We use the Adam optimizer with default parameters, and reduce the learning rate by a factor of 10 each time the loss on the validation set does not decrease for 10 epochs. We pre-train the network on a separate set of 15,000 images generated with pybullet and containing objects similar to those in our videos.
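A minimal sketch of the combined pre-training loss (Equations 1–2 above, weighted 0.7 / 0.3) is given below. It assumes the renderer outputs one logit channel per class including an explicit background channel, which is a convenient but unverified reading of the mask formulation; names and shapes are our own.

```python
import torch
import torch.nn.functional as F

def renderer_pretraining_loss(mask_logits, depth_pred, mask_gt, depth_gt):
    """Weighted sum of mask NLL (Eq. 1) and depth MSE (Eq. 2): L = 0.7*Lmask + 0.3*Ldepth.

    mask_logits : (B, N+1, H, W) unnormalized per-pixel scores (background + N objects).
    mask_gt     : (B, H, W) integer labels in {0, ..., N} (0 = background).
    depth_pred, depth_gt : (B, H, W) depth maps normalized to [-1, 1].
    """
    l_mask = F.cross_entropy(mask_logits, mask_gt)   # per-pixel negative log-likelihood, averaged
    l_depth = F.mse_loss(depth_pred, depth_gt)
    return 0.7 * l_mask + 0.3 * l_depth
```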
Training details of the Recurrent Interaction Network. The detailed outline of training the Recurrent Interaction Network is given in Algorithm 1.
Algorithm 1: Train Recurrent Interaction Network
Data: T, L: length of the video and prediction span, respectively; Mt, t = 1..T: instance masks; Dt, t = 1..T: depth maps; Rend: pre-trained Renderer; RIN: Recurrent Interaction Network (initialized with constant-velocity motion); Criterion: stopping criterion (RIN loss on validation); Detection(mt, dt): returns centroid, depth and size of instance masks; NLL, MSE: Negative Log-Likelihood and Mean-Squared Error, respectively.
Result: Trajectory estimates p̄t+1..t+L; trained Recurrent Interaction Network wRIN.
while Criterion(RIN) do
    // Initialization of positions and velocities
    for t ∈ {1..T − 1} do
        p̂t ← Detection(mt, dt)    // initial object positions from observed masks and depths
        p̄t ← arg min_p NLL(Rend(p), (mt, dt)), starting from p = p̂t    // occlusion-aware position refinement using the Renderer
        v̄t ← p̄t+1 − p̄t    // estimate object velocities from consecutive frames
    // Training the Recurrent Interaction Network
    for t ∈ {1..T − L} do
        p̂t+1..t+L ← RIN(p̄t, v̄t)    // predict sequence of states (of all objects) using roll-out
        p̄t+1..t+L ← arg min_p NLL(Rend(p), (mt+1..t+L, dt+1..t+L)), starting from p = p̂t+1..t+L    // occlusion-aware position refinement
        wRIN ← arg min_w MSE(RIN(p̄t, v̄t), p̄t+1..t+L)    // update weights of the Recurrent Interaction Network
Given an initial state st, the Recurrent Interaction Network recursively predicts a sequence of future states ŝt+1, ŝt+2, ..., ŝt+L, as well as error terms τ̂t+1, τ̂t+2, ..., τ̂t+L. This predicted sequence is compared to object positions (ground truth or derived from masks after refinement), and the loss is computed as the sum of the negative log-likelihood (Equation 3) along the sequence. We use the Adam optimizer, and divide the learning rate by L to be consistent with the size of the sequence (as the loss is a sum over a sequence of length L). The same learning rate decay and stopping procedure as for the renderer is applied. Sequence lengths of 4, 6 and 10 were tested during training, with lengths of 10 giving slightly more stable rollouts.
S4. Occlusion-aware refinement of object positions
Position refinement consists of using the pre-trained Renderer to correct the estimated positions of all objects in a particular frame. To do so, we give the position estimates as input to the Renderer, which outputs a corresponding pair of mask and depth field for the frame, (M̂, D̂), properly rendering the inter-object occlusions. This prediction is compared to the observed mask and depth field, returning errors that are backpropagated through the frozen weights of the Renderer. We perform gradient descent on the input itself to correct the object position and size estimates according to the observations. In our experiments, we set the learning rate to 0.01 and compute 200 iterations of gradient descent. Details of the loss are given in Section S3. For object positions estimated from object masks, this refinement allows us to reduce errors due to partial occlusions (moving the predicted center of an object from its visible mask centroid to its real center).
S5. Future prediction: Comparison with Riochet et al. (2018)
We evaluate the error of the mask and depth prediction, measured by the training error described in Section S3. Here, we compare our model to the CNN autoencoder of Riochet et al. (2018), which directly predicts future masks from current ones, without explicitly modelling the dynamics of the individual objects in the scene. Note that this baseline is similar to Lerer et al. (2016). Results are shown in Table S1. As before, the existence of external occluders or the presence of tilt degrades the performance, but even in this case, our model remains much better than the CNN autoencoder of Riochet et al. (2018).

                                         Top view   Top view+occlusion   45◦ tilt   25◦ tilt   15◦ tilt
CNN autoencoder (Riochet et al., 2018)   0.0147     0.0451               0.0125     0.0124     0.0121
RIN, trained on mask+depth               0.0101     0.0342               0.0072     0.0070     0.0069
Proba-RIN, trained on mask+depth         0.0100     0.0351               0.0069     0.0071     0.0065
Table S1. Aggregate pixel reconstruction error for mask and depth, for a prediction span of two frames. This error is the loss used for training (described in Section S3); it is a weighted combination of the mask error (per-pixel classification error) and the depth error (mean squared error).

S6. Detailed roll-out results
In Figure S1, we report the proportion of correctly followed objects for different rollout lengths (5, 10 and 30 frames) as a function of the distance error (pixels). Note that the size of the smallest object is around 20 pixels.
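For completeness, a minimal sketch of the occlusion-aware position refinement of Section S4 (gradient descent on the object states through the frozen renderer, 200 steps at learning rate 0.01) is given below. The renderer interface, shapes and loss weighting are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def refine_positions(renderer, states_init, mask_obs, depth_obs, n_steps=200, lr=0.01):
    """Gradient descent on the object states themselves, through the frozen renderer.

    renderer    : pre-trained module mapping states -> (mask_logits, depth_pred),
                  with mask_logits of shape (1, N+1, H, W) and depth_pred of shape (1, H, W).
    states_init : (K, S) tensor of per-object states (position, depth, size, ...).
    mask_obs    : (1, H, W) observed integer mask; depth_obs : (1, H, W) observed depth.
    """
    states = states_init.clone().detach().requires_grad_(True)
    optimizer = torch.optim.SGD([states], lr=lr)   # only the states are updated
    for _ in range(n_steps):
        optimizer.zero_grad()
        mask_logits, depth_pred = renderer(states)
        loss = 0.7 * F.cross_entropy(mask_logits, mask_obs) + 0.3 * F.mse_loss(depth_pred, depth_obs)
        loss.backward()   # gradients flow through the frozen renderer into `states`
        optimizer.step()
    return states.detach()
```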
1. What is the main contribution of the paper regarding the prediction of image patch dynamics?
2. What are the weaknesses of the proposed approach, particularly in terms of object segmentation and training requirements?
3. Do you have any concerns about the writing clarity and understanding of the model and training procedure?
4. How does the reviewer assess the novelty and practicality of the work?
Review
Review
The key contribution of this paper is a model that can predict the dynamics of pre-segmented image patches under multiple frames of occlusion. The input image is processed by a CNN, the dynamics are predicted by a recurrent interaction net, and the output image is generated by a (deconv) CNN. The key weaknesses I see are:
- The objects must be pre-segmented by some externally defined mechanism. Where does this mechanism come from? Segmenting the objects is challenging, and there are various recent methods that explore how to learn to do this (van Steenkiste et al., 2018). But if one has the segmentation masks, that simplifies things considerably and also offers a good estimate of the location and velocity (if there are 2+ frames).
- During training, the error is computed on all frames, including occluded ones, and backpropagated into the weights. But if I understand this correctly, this means that for training you need access to ground truth rendered trajectories. It would be better if the model didn't require the ground truth segmentations for objects that are occluded. How would they be made available to a learning system?
- Generally the writing wasn't that clear and I struggled to understand some details of the model and training procedure.
Overall I don't believe this work is ready for publication, as there isn't that much novelty and the requirements are impractical.
ICLR
Title
Occlusion resistant learning of intuitive physics from videos
Abstract
To reach human performance on complex tasks, a key ability for artificial systems is to understand physical interactions between objects and to predict future outcomes of a situation. This ability, often referred to as intuitive physics, has recently received attention, and several methods were proposed to learn these physical rules from video sequences. Yet, most of these methods are restricted to the case where no occlusions occur, narrowing the potential areas of application. The main contribution of this paper is a method combining a predictor of object dynamics and a neural renderer that efficiently predicts future trajectories and explicitly models partial and full occlusions among objects. We present a training procedure enabling the learning of intuitive physics directly from input videos containing segmentation masks of objects and their depth. Our results show that our model learns object dynamics despite significant inter-object occlusions, and realistically predicts segmentation masks up to 30 frames in the future. We study model performance for increasing levels of occlusions, and compare results to previous work on the tasks of future prediction and object following. We also show results on predicting the motion of objects in real videos and demonstrate significant improvements over the state of the art on the object permanence task in the intuitive physics benchmark of Riochet et al. (2018).
1 Introduction
Learning intuitive physics has recently raised significant interest in the machine learning literature. To reach human performance on complex visual tasks, artificial systems need to understand the world in terms of macroscopic objects, movements, interactions, etc. Infant development experiments show that young infants quickly acquire an intuitive grasp of how objects interact in the world, and that they use these intuitions for prediction and action planning (Carey, 2009; Baillargeon & Carey, 2012). This includes the notions of gravity (Carey, 2009), continuity of trajectories (Spelke et al., 1995), collisions (Saxe & Carey, 2006), etc. Object permanence, the fact that an object continues to exist when it is occluded (Kellman & Spelke, 1983), is one of the first concepts developed by infants. From a modeling point of view, the key scientific question is how to develop general-purpose methods that can make physical predictions in noisy environments, where many variables of the system are unknown. A model that could mimic even some of infants' ability to predict the dynamics of objects and their interactions would be a significant advancement in model-based action planning for robotics (Agrawal et al., 2016; Finn & Levine, 2017). Importantly, to be applied to real-world problems, such a model needs to predict object motion in 3D and handle frequent inter-object occlusions. Yet, to our knowledge, most current works on learning intuitive physics get around this challenge by either (i) working in 2D spaces with no occlusions (Battaglia et al., 2016; Chang et al., 2016; Fragkiadaki et al., 2015) or (ii) learning end-to-end models without decomposing the scene into objects (Agrawal et al., 2016; Lerer et al., 2016; Finn et al., 2016). The former methods have demonstrated that learning models of intuitive physics is possible but assume ground truth positions of objects are available at both training and test time.
The latter methods can operate directly on pixel inputs without knowing ground truth positions of objects, but they are typically limited to a small number of objects and generalize poorly to new setups (e.g. a new number of objects in the scene, see (Lerer et al., 2016)). A third class of methods has recently emerged (Janner et al., 2018) that first decomposes the input image of the 3D scene into layers corresponding to masks of individual objects and learns scene dynamics given such an object-centric decomposition. Note that here the object dynamics is learnt from pixel masks of individual objects, rather than from their ground truth positions. This is difficult for 3D scenes due to frequent inter-object occlusions, which present two major challenges. First, estimating accurate positions and velocities of objects is challenging due to partial occlusions by other objects. Second, objects can be fully occluded by other objects for a significant number of frames. This work falls into the third class of compositional methods, but develops an occlusion resistant model for learning intuitive physics that addresses both of these challenges due to inter-object occlusions. In detail, we propose a compositional model that, from object instance masks and depth fields in two consecutive frames (Mt,t+1, Dt,t+1), estimates the center, velocity and size of objects. This predicted state ŝt is then used as input of a Recurrent Interaction Network, which predicts a sequence of future states ŝt+2, ..., ŝt+L. This sequence of states is given to the Compositional Rendering Network, which produces segmentation masks M̂t+2, ..., M̂t+L and depth estimates D̂t+2, ..., D̂t+L in future frames. The key innovation of the proposed model is dealing with partial and complete occlusions in the scene. To deal with partial occlusions, the obtained sequence of masks+depths is compared to the ground truth, and gradients are backpropagated through the pre-trained Compositional Rendering Network to refine state predictions. This allows us to refine the positions of partially occluded objects, where simply taking the centroid of the observed portion of the mask results in an incorrect estimate of the object position. With this refinement, object positions are corrected taking into account the unobserved (occluded) portion of the object. The refined state estimates s̄t+1, ..., s̄t+L are used at training time for learning the parameters of the Recurrent Interaction Network, and at test time to improve the accuracy of object position prediction when following partially occluded objects. To deal with full occlusions, when the object is not visible in multiple frames, we use the learnt model of object dynamics (the Recurrent Interaction Network) to predict the position of the object multiple frames ahead, thus recovering the object position after the occlusion. Using the proposed approach, we show that it is possible to learn object dynamics in 3D environments with severe inter-object occlusions and to predict segmentation masks up to 30 frames in the future despite occlusions by other objects, thus mimicking object permanence.
2 Related work
Forward modelling in videos. Forward modelling in video has been studied for action planning (Ebert et al., 2018; Finn et al., 2016) and as a scheme for unsupervised learning of visual features (Lan et al., 2014; Mathieu et al., 2015). In that setup, a model is given a sequence of frames and has to generate frames in future time steps.
To succeed in this task, such models need to predict object movements, suggesting that they need to learn physical regularities from video. However, models for end-to-end future frame prediction tend to perform poorly on long-term prediction tasks (say, more than 5-8 frames (Lan et al., 2014; Mathieu et al., 2015; Finn et al., 2016)), failing to preserve object properties and generating blurry outputs. This suggests that models for intuitive physics may require a more structured representation of objects and their interactions.
Learning dynamics of objects. Longer-term predictions can be more successful when done on the level of trajectories of individual objects. For example, in (Wu et al., 2017b), the authors propose "scene de-rendering", a system that builds an object-based, structured representation from a static (synthetic) image. The recovered state can be further used for physical reasoning and future prediction using a physics engine on both synthetic and real data (Battaglia et al., 2013; Wu et al., 2017a). Future prediction from a static image is often multi-modal (e.g. a car can move forward or backward), and hence models able to predict multiple possible futures, e.g. based on variational auto-encoders (Xue et al., 2016), are needed. Others have developed structured models that factor object motion and object rendering into two learnable modules. Examples include (Watters et al., 2017; Marco Fraccaro, 2017; Ehrhardt et al., 2017b;a), which combine object-centric dynamic models and visual encoders. Such models parse each frame into a set of object state representations, which are used as input of a "dynamic" model predicting object motion. However, (Marco Fraccaro, 2017) drastically restrict the complexity of the visual input by working on binary 32x32 frames, and (Ehrhardt et al., 2017b;a; Watters et al., 2017) still need the ground truth positions of objects to train their models. None of these works explicitly models inter-object occlusions, which is the focus of our method. In our work, we build on the learnable models of object dynamics of (Battaglia et al., 2016) and (Chang et al., 2016), which have the key property that they are compositional and hence can model a variable number of objects, but we extend them to learn from visual input rather than from ground truth object state vectors. Our work is related to (Janner et al., 2018), done independently and concurrently with our work, who develop an object-oriented model of dynamics coupled with a differentiable object renderer to predict a single image with segmentation masks of objects at a future time, given a single still image as input. In contrast, our model predicts frame-by-frame object motion in scenes with partial and full object occlusion. This is possible because (i) our model of dynamics is recursive, predicting a whole sequence of object movements (instead of one single image in the future (Janner et al., 2018)), which allows the model to be applied recursively to follow an object through complete occlusion by other objects; and (ii) we design a refinement procedure that allows us to refine the estimated positions of objects in the case of partial occlusions. In addition, in contrast to (Janner et al., 2018), our model predicts the velocity of objects and the depth of the scene (also taking as input a pair of frames and the depth field). Others have proposed unsupervised methods to discover objects and their interactions in 2D videos (van Steenkiste et al., 2018).
It is also possible to construct Hierarchical Relation Networks (Mrowca et al., 2018), representing objects as graphs and predicting interactions between pairs of objects. However, this task is still challenging and requires full supervision in the form of ground truth positions and velocities of objects.
Learning physical properties from visual inputs. Related are also methods for learning physical properties of objects. Learning of physical properties, such as mass, volume or coefficients of friction and restitution, has been considered in (Wu et al., 2016). Others have looked at predicting the stability and/or the dynamics of towers of blocks (Lerer et al., 2016; Zhang et al., 2016; Li et al., 2016a;b; Mirza et al., 2017; Groth et al., 2018). Our work is complementary. We do not consider prediction of physical properties but focus on learning models of object dynamics handling inter-object occlusions at both training and test time.
Contributions. We describe a model that learns complex dynamics of objects in a 3D environment, where inter-object occlusions occur frequently. Our model combines an abstract representation of the scene (position, velocity and depth of objects) with a compositional neural renderer predicting the resulting object masks with depth and explicitly modelling occlusions between objects. This procedure allows us to train the model even when some objects are partially or totally occluded. Unlike (Watters et al., 2017), our model is fully compositional and handles a variable number of objects in the scene. Moreover, it does not require annotated inter-frame correspondences as input during training.
3 Occlusion resistant modeling for intuitive physics
This section describes our model for occlusion resistant learning of intuitive physics. We first describe the learning set-up considered in this work. We then describe in detail the two main components of our model. In Section 3.2 we outline the compositional renderer with occlusion reasoning that predicts object masks given a scene state representation, and in Section 3.3 we detail the recurrent interaction network that predicts the scene state evolution over time. Finally, in Section 3.4 we outline the training procedure.
3.1 Set-up overview
As illustrated in Figure 1 (and the Algorithm in the Supplementary Material), during learning our method observes a sequence of object instance masks and depth fields Mt,..,t+L, Dt,..,t+L. The mask for each frame is composed of a set of channels where each channel represents the pixels corresponding to an individual object, along with its color and shape (boxes or balls of different sizes). The model does not require knowledge of the correspondence between objects over time, which might be difficult to obtain in practice. Our model is composed of two networks described below: a pre-trained, occlusion-sensitive Compositional Rendering Network (Renderer), which renders masks and depth fields given a set of object positions (also called states), and a trainable Recurrent Interaction Network (RecIntNet), which predicts positions of objects in future frames.
3.2 Occlusion modeling: the Compositional Rendering Network
[Figure 2: Architecture of the Compositional Rendering Network. For each input object [px, py, d, c], an Object Renderer (an MLP with 3 hidden layers applied over a per-pixel [x, y] position field, followed by 3×3 convolutions with ×2 bilinear upsampling) predicts an object mask and an object depth map; an occlusion predictor aggregates the per-object outputs into the scene mask and scene depth, trained with Lmask and Ldepth.]
The object rendering network takes as input the coordinates (xk, yk, dk) of object k in a frame together with additional dimensions for intrinsic object properties (shape, color and size) (c).
The network predicts the object's binary mask M̂k as well as its depth map D̂k. The input vector (xk, yk, dk, ck) ∈ R^l is first copied into an (l+2)×16×16 tensor, where each 16×16 cell position contains an identical copy of the input vector together with the x and y coordinates of the cell. Adding the x and y coordinates may seem redundant, but this kind of position field enables a very local computation of the shape of the object and avoids a large number of network parameters (similar position-field architectures have also been studied in recent work). The input tensor is processed with 1 × 1 convolution filters. The resulting 16-channel feature map is further processed by three blocks of convolutions. Each block contains three convolutions with filters of size 1 × 1, 3 × 3 and 1 × 1, and 4, 4 and 16 feature maps, respectively. We use ReLU pre-activation before each convolution, and up-sample (scale of 2, bilinear interpolation) the feature maps between blocks. The last convolution outputs N + 1 feature maps of size 128 × 128, the first feature map encoding depth and the N last feature maps encoding mask predictions for the individual objects. The object rendering network is applied to all objects present, resulting in a set of masks and depth maps denoted {(M̂k, D̂k), k = 1..N}.
The Occlusion predictor takes as input the masks and depth maps for the N objects and aggregates them to construct the final occlusion-consistent mask and depth map. To do so it computes, for each pixel i, j ≤ 128 and object k, the following weight:
c^k_{i,j} = exp(λ D̂^k_{i,j}) / ∑_{q=1}^{N} exp(λ D̂^q_{i,j}),   k = 1..N,   (1)
where λ is a parameter learned by the model. The final masks and depth maps are computed as a weighted combination of the masks M̂^k_{i,j} and depth maps D̂^k_{i,j} of the individual objects k: M̂_{i,j} = ∑_{k=1}^{N} c^k_{i,j} M̂^k_{i,j} and D̂_{i,j} = ∑_{k=1}^{N} c^k_{i,j} D̂^k_{i,j}, where i, j are output pixel coordinates (i, j ≤ 128) and c^k_{i,j} are the weights given by (1). The intuition is that the occlusion renderer constructs the final output (M̂, D̂) by selecting, for every pixel, the mask with minimal depth (corresponding to the object occluding all other objects). For negative values of λ, equation (1) acts as a softmin that selects for every pixel the object with minimal predicted depth. Because λ is a trainable parameter, gradient descent forces it to take large negative values, ensuring good occlusion predictions. Also note that this model does not need to be supervised by the depth field to predict occlusions correctly. In this case, the object rendering network still predicts a feature map D̂ that is no longer equal to the depth, but is rather an abstract quantity that preserves the relative order of objects in the view. This allows the Renderer to predict occlusions when the target masks are RGB only. However, it still needs depth information in the input (either true depth or a relative ordering).
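The aggregation in Equation (1) is a per-pixel softmax over objects of λ times the predicted depth. A minimal sketch (illustrative names, not the authors' code) is given below.

```python
import torch

def compose_scene(obj_masks, obj_depths, lam):
    """Occlusion-aware aggregation of per-object renderings (Equation 1).

    obj_masks, obj_depths : (N, H, W) per-object mask and depth predictions.
    lam : learned scalar; driven to large negative values, so the weights behave
          like a softmin over depth and the closest object dominates each pixel.
    """
    weights = torch.softmax(lam * obj_depths, dim=0)   # c^k_{i,j}, summing to 1 over objects
    scene_mask = (weights * obj_masks).sum(dim=0)
    scene_depth = (weights * obj_depths).sum(dim=0)
    return scene_mask, scene_depth
```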
3.3 Dynamics prediction: the Recurrent Interaction Network (RecIntNet)
To model object dynamics, we build on the Interaction Network (Battaglia et al., 2016), which predicts the dynamics of a variable number of objects by modelling their pairwise interactions. Here we describe three extensions of the vanilla Interaction Network model. First, we extend the Interaction Network to model 2.5D scenes where position and velocity have a depth component. Second, we extend the Interaction Network to train from the whole sequence of future states and call this new model the Recurrent Interaction Network. Third, we introduce variance in the position predictions to stabilise the learning phase and avoid over-penalizing very uncertain predictions. The three extensions are described below.
Modelling compositional object dynamics in 2.5D scenes. As shown in (Battaglia et al., 2016), Interaction Networks can be used to predict object motion in both 3D and 2D space. Given a list of objects represented by their positions, velocities and sizes in the Cartesian plane, an Interaction Network models the interactions between all pairs of objects, aggregates them over the image and predicts the resulting motion for each object. Here, we model object interactions in 2.5D space, since we have no access to the object positions and velocities in Cartesian space. Instead, we have locations and velocities in the image plane plus depth (the distance between the objects and the camera).
Training from a sequence of future frames. The vanilla Interaction Network (Battaglia et al., 2016) is trained to predict the position and velocity of each object one step into the future. Here, we learn from multiple future frames. In detail, we "roll out" the Interaction Network to predict a whole sequence of future states, as if a standard Interaction Network was applied in a recurrent manner. We found that faster training can be achieved by directly predicting changes in the velocity, hence:
[p1, v1, c] = [p0 + δt·v0 + (δt²/2)·dv, v0 + dv, c],   (2)
where p1 and v1 are the position and velocity of the object at time t1, p0 and v0 are the position and velocity at time t0, and δt = t1 − t0 is the time step. Position and velocity are expressed in pixel space: p = [px, py, d], where px, py are the position of the object in the frame, d is the depth, and v is the velocity in that space. Hence dv can be seen as the acceleration, and (v0 + dv) and (p0 + δt·v0 + (δt²/2)·dv) as the first- and second-order Taylor approximations of velocity and position, respectively. Assuming an initial weight distribution close to zero, this gives the model a prior that the object motion is linear.
Prediction uncertainty. To account for prediction uncertainty and stabilize learning, we assume that the object position follows a multivariate normal distribution with a diagonal covariance matrix. Each term σ²x, σ²y, σ²d of the covariance matrix represents the uncertainty in the prediction along the x-axis, the y-axis and depth. Such uncertainty is also given as input to the model, to account for uncertainty either in object detection (first prediction step) or in the recurrent object state prediction. The resulting loss is the negative log-likelihood of the target p1 w.r.t. the multivariate normal distribution, which reduces to
L((p̂1, τ̂1), p1) = (p̂1 − p1)² / exp(τ̂1) + τ̂1,   (3)
where τ̂1 = log σ̂1² is the estimated level of noise propagated through the Recurrent Interaction Network, σ1 concatenates σ²x, σ²y, σ²d, p1 is the ground truth state and p̂1 is the predicted state at time t + 1. The intuition is that the squared error term in the numerator is weighted by the estimated level of noise τ̂1, which also acts as an additional regularizer.
3.4 System Training
In this section we give a high-level description of the procedure for training our model (details are in the supplementary Section S3). The Compositional Rendering Network is pre-trained offline from masks and depths in individual frames. Training of the Recurrent Interaction Network is done in the following three steps. First (initialization phase), we select from each training video short clips containing L frames (here, we use L=10).
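Before detailing the training steps, the following is a minimal sketch of the state update and loss of Section 3.3 (Equations 2 and 3). The network itself is not shown; dv and the log-variance τ are assumed to be its outputs, and the function names are our own.

```python
import torch

def rollout_step(p0, v0, dv, dt=1.0):
    """One RecIntNet state update (Equation 2): the network outputs the velocity change dv."""
    v1 = v0 + dv
    p1 = p0 + dt * v0 + 0.5 * dt ** 2 * dv
    return p1, v1

def nll_loss(p_pred, tau_pred, p_target):
    """Diagonal-Gaussian negative log-likelihood of Equation 3, summed over dimensions and frames."""
    return ((p_pred - p_target) ** 2 / torch.exp(tau_pred) + tau_pred).sum()
```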
In each frame, we estimate position, depth and size of each object by computing the centroid of each mask, its average depth and diameter (max distance between two mask pixels). To correct errors due to partial occlusions, we perform occlusion-aware refinement of object positions using gradient descent through the pre-trained Renderer (see supplementary material). The result is a partial state vector (no velocities) for each frame, corrected for partial occlusions. Spatially close objects in two consecutive frames are linked and considered the same objects. In a second step (prediction phase), we use the Recurrent Interaction Network to roll out L− 2 predictions for the position of these objects in future frames starting from Frame 2. Frame 1 is used to compute the initial velocities of Frame 2. In a third step, (update phase), we use the distance between the ground truth positions established in step 1 and the rollout positions to perform the training of the Recurrent Interaction Network. 4 Experiments In this section we demonstrate the ability of our model to learn intuitive physics in presence of inter-object occlusions. We evaluate our model on two task: (i) future prediction, predicting objects’ trajectories up to a horizon of 10 frames and (ii) object following, coupling the dynamics network with the neural renderer to follow objects under occlusions, up to a horizon of 30 frames. In future prediction, we initialize the network with two frames, which enable the computation of object positions and velocities based on the instance masks as in the training phase. We then run a roll-out for N consecutive frames with the interaction network. We evaluate this rollout by comparing the predicted positions or reconstructed pixels with the ground truth. In object following, we alternate between short-term rollout (using the interaction network to predict the next frame) and object position refinement (using the renderer). This allows us to put an index on each object, and follow them through large periods of occlusions. During full occlusion, the position is solely determined by the interaction network, since the object position refinement has a zero gradient. During full or partial occlusion, object position refinement is used to reconstruct a better estimate of the positions and velocities. To test object following, we measure the accuracy of the position estimates across long sequences containing occlusions. We also evaluate the ability to detect the violation of object permanence (objects disappearing or appearing out of nowhere). Our evaluation is mostly based on a synthetic dataset, which we release for this paper. We also study generalization to real scenes, and compare to baseline models on the object permanance subset of the intuitive physics benchmark (Riochet et al., 2018). 4.1 Evaluating Future Prediction We use pybullet1 physics simulator to generate videos of variable number of balls of different colors and sizes bouncing in a 3D scene (a large box with solid walls) containing a variable number of smaller static 3D boxes. We generate five datasets, where we vary the camera tilt and the presence of occluders. In the first dataset (“Top view") we record videos with a top camera view (or 90◦), where the borders of the frame coincide with the walls of the box. In the second dataset (“Top view+occ"), we add a large moving object occluding 25% of the scene. 
Finally, we decrease the camera viewing angle to 45◦, 25◦ and 15◦ degrees, which results in an increasing amount of inter-object object occlusions due to perspective projection of the 3D scene onto a 2D image plane. We computed the proportion of time each object is occluded or partially occluded and found 3.1% in the top-view videos, 31.1% in the top view occluded videos, and 5.9%, 11.7%, 13.4% in the 45◦, 25◦, 15◦ tilted videos, respectively. Additional details of the datasets are given in the supplementary material. Inter-object occlusion investigation. In this section we consider prediction horizons of 5 and 10 frames, and evaluate the position error as a L2 distance between the predicted and target object positions. L2 distance is computed in the 3D Cartesian scene coordinates, such that results are comparable across different camera tilts. Results are shown in Table 1. We first note that our model trained on mask and depth prediction significantly outperforms the linear baseline, which is computed as an extrapolation of the position of objects based on their initial velocities. Moreover, the results of our method are relatively stable across challenging setups with occlusions by external objects or frequent self-occlusions in tilted views. This demonstrates the potential ability of our method to be trained from real videos where occlusions and other factors usually prevent reliable recovery of object states. Ablation Studies. As an ablation study we replace the Recurrent Interaction Network (RecIntNet) in our model with a multi-layer perceptron. This MLP contains four hidden layers of size 180 and is trained the same way as RecIntNet, modelling acceleration as described in equation 3.3. To deal with the varying number of objects in the dataset, we pad the inputs with zeros. We observe that RecIntNet allows more robust predictions through time. As a second ablation study, we train the Recurrent Interaction Network without modelling acceleration (3.3). This is similar to the model described in (Janner et al., 2018), where object representation is not decomposed into position / velocity / intrinsic properties, but is 1https://pypi.org/project/pybullet rather a (unstructured) 256-dimensional vector. We observe a significant loss in performance, tending to confirm that modelling position and velocity explicity, and having a constant velocity prior on motion (given by 3.3) improves future predictions. As a third ablation study, we train a deterministic variant of RecIntNet, where only the sequence of states is predicted, without the uncertainty term τ . The loss considered is the mean squared error between the predicted and the observation state. Observed results are slightly worse than our model handling uncertainty (see NoProba-RIN), but close enough to say that this is not a key feature for modelling 5 or 10 frames in the future. In qualitative experiments, however, we observed more robust long-term predictions after introducing the uncertainty term τ in the model and the loss (equation 3). For the purpose of comparison, we also evaluate three models trained using ground truth object states. Our Recurrent Interaction Network trained on ground truth object states gives similar results to the model of (Battaglia et al., 2016). As expected, training on ground truth states (effectively ignoring occlusions and other effects) performs better than training from object masks and depth. 
We also compare with CNN autoencoder (Riochet et al., 2018), showing our models gives better forward mask and depth predictions than CNN auto-encoders trained end-to-end. Full results are given in the supplementary material. Generalization to real scenes. We construct a dataset of 22 real videos, containing a variable number of colored balls and blocks in motion. Videos are recorded with a Microsoft kinect2 device, including RGB and depth frames. The setup is similar to the one generated with pybullet, recorded with a top camera view and containing 4 balls and a variable number of blocs (from 0 to 3). Here again, the borders of the frame coincide with the walls of the box. Taking as input object segmentation of the first two frames, we use our model to predict object trajectories through the whole video (see Figure 4). We use the model trained on top-view pybullet videos, without fine-tuning weights. We measure the error between predictions and ground truth positions along the rollout. Results are shown in Table 2 and clearly demonstrate that out approach outperforms the linear and MLP baselines. 4.2 Evaluating object following Evaluation on long roll-outs. We ran at test time longer roll-outs (up to 30 frames), iteratively corrected by our occlusion-aware refinement procedure. This can be viewed as a form of tracking evaluation. Table 3 shows the percentage of object predictions that diverge by more than an object diameter (20 pixels) using this method. The performance is very good, even for tilted views. Supplementary Figure 1 shows these numbers as a function of the pixel threshold. Roll-outs (without refinement) are provided in the following anonymous google drive (link), showing qualitatively convincing trajectories and bouncing behaviors, even for tilted views. Evaluation on IntPhys benchmark. Riochet et al. (2018) propose a benchmark to evaluate intuitive physics models. Drawing inspiration from infant development studies, the benchmark consists of classifying whether a particular video is physically possible or impossible. We focus on the task O1 / Occluded / Dynamic_1, evaluating the notion of object permanence in presence of occlusions. From the provided train set 2 containing only possible videos, we train our model to predict a sequence of 8 frames from an input pair of frames. The evaluation subset O1 / Occluded / Dynamic_1 contains on 720 videos, forming 180 quadruplet of (2 possible / 2 impossible) videos. Starting from the first visible position of an object, we predict its trajectory until the end of the video, refining prediction at every time step. For each video, the predicted masks are compared with the observed masks, resulting in a sequence of reconstruction errors. We derive an implausibility score for a video as the maximum error through the whole sequence. For each quadruplet of (2 possible / 2 impossible) videos, we classify the two videos that have the highest implausibility score as impossible, the two other as possible. Table 4 reports error rates, in comparison with baselines from Riochet et al. (2018). We can see a clear improvement of our method, confirming it can follow objects through long occlusions. 5 Discussion Learning the physics of simple macroscopic object dynamics and interactions is a relatively easy task when ground truth coordinates are provided to the system, and techniques like Interaction Networks trained with a future frame prediction loss are quite successful (Battaglia et al., 2016). 
Of course a major drawback of this kind of system is that it is basically restricted 2www.intphys.com to learning physics from 3D simulators. In real life, the ground truth coordinates of each object are unknown, only projected 2D views are available. Interestingly, we found that projective geometry is not, in and of itself, a difficulty. Indeed, when an Interaction Network is fed, not with 3D Cartesian object coordinates, but with a 2.5D projective referential such as the xy position of objects in a retina (plus depth), the accuracy of the prediction remains unchanged compared with the Cartesian ground truth. As RGBD videos are relatively easy to collect in large quantities, it would not be difficult to train systems with such inputs. But real world videos raise two other major difficulties: (i) images are not easily segmentable into objects, and (ii) objects do not remain always visible and tend to be occluded by other objects. This makes the ground truth coordinates of objects only partially observable. Here, we provided a first step towards more realistic physics learning by addressing the occlusion problem. We introduce a physics learning system composed of an Interaction Network followed by a trainable Renderer. The Interaction Network has been made recurrent, such that ground truth positions and velocities have only to be fed at the first frame. The renderer can be qualified as 2.5D, in that it takes as input the positions and velocities of objects (in retina pixel xy-d coordinates) and computes the resulting instance masks (plus depth). The 2.5D renderer is itself relatively lightweight (only 1233 parameters). It is based on a rather simple convolutional architecture, uses position fields, and can be trained with few examples to render objects of arbitrary shapes with controlable 2.5D positions respecting occlusions. The outcome can be seen on the rendering of tilted views as shown in the videos provided in the anonymous google drive (link). What we showed is that instead of training the interaction network to predict ground truth positions, we can directly train it through estimates obtained from mask+depth, corrected through the renderer. The resulting system, of course, produces less accurate predictions that when trained with real positions, but is still better than either linear baselines, or CNN mask prediction networks by (Riochet et al., 2018; Lerer et al., 2016). Interestingly, the reconstruction loss is still effective even in the presence of external occluders, or when objects occlude each other because of a tilted view during training. This can be explained by the fact that when an object is occluded, the gradient of the reconstruction loss will be zero (because no matter where the object is predicted to be, so long as it is predicted to be behind another object, it is not visible, hence contributes to no loss). This amounts to simply reducing the size of the training set, so it only slightly degrades the final performance. Importantly, this cancellation of the losses occurs without explicitly telling the system which objects are occluded and which are not. This is implicitly learnt by the system through the rendering network. Applying this method to the intuitive physics benchmark presented in (Riochet et al., 2018), we show it outperforms baselines on modelling object dynamics under occlusions. Further work needs to be done to fully train this system end-to-end, in particular, by learning the renderer and the interaction network jointly. 
Another avenues relate to the first problem raised above, i.e. the segmentation problem. Object segmentation based on raw pixels has been addressed in previous work, but yields errors (over- or under-segmentations) on more realistic datasets, which could have dramatic effect on the interaction network, which crucially depends on reliable object identification. Such issues need to be addressed before end-to-end physics prediction systems can be trained and used with real videos, and approximate the ability of infants to predict the interactions of objects from live or video scenes. S1. Description of supplementary videos The videos are in the anonymous google drive: https://drive.google.com/drive/ folders/111XK6GZnmHjd_7US6LGkxg6cAhJ2WDxQ?usp=sharing in the videos/ subdirectory. See also the README slideshow. • scene_overview.mp4 shows raw videos of the entire environment. • tracking_occlusions_*.mp4 show examples of position prediction through complete occlusions, using our occlusion-aware object position refinement. This shows that our model can keep track of the object identity through complete occlusions, mimicking “object permanence". • one_class*.mp4 show different examples of our model following motion of multiple objects in the scene. All balls have the same color which makes them difficult to follow in case of mutual interactions. Videos come from tilted 25◦ experiments, which are the most challenging because they include inter-object occlusions. Dots represent the predicted position of each object, the color being its identity. Our model shows very good predictions with small colored markers (dots) well centered in the middle of each object, with marker color remaining constant for each object preserving the object identity during occlusions and collisions. one_class_raw*.mp4 show rendered original views of the same dynamic scenes but imaged from a different viewpoint for better understanding. • rollout_0.mp4, rollout_1.mp4 show three different rollouts without position refinement. From left to right: ground truth trajectories, our model trained of state, our model trained on masks, our model trained on masks with occlusions during training. Rollout length is 20 frames. • rollout_tilt*_model.mp4 and rollout_tilt*_groundtruth.mp4 show the same dynamic scene but observed with various camera tilts (e.g. tilt45_model.mp4 show a video for a camera tilt of 45 degrees). *_model.mp4 are rollouts of our Recurrent Interaction Network (RecIntNet) computed without the occlusion-aware position refinement based on the observed masks (pure forward prediction of the dynamics model). *_groundtruth.mp4 are the corresponding ground-truth trajectories, rendered with the Compositional Rendering Network. • intphys_*.mp4 show object following in the IntPhys training set. • rollout_pybullet_*.mp4 show free rollout (no refinement) on synthetic dataset. • rollout_real_*.mp4 show generalization to real scenes. S2. Datasets To validate our model, we use pybullet1 physics simulator to generate videos of variable number of balls of different colors and sizes bouncing in a 3D scene (a large box with solid walls) containing a variable number of smaller static 3D boxes. We generate five dataset versions, where we vary the camera tilt and the presence of occluders. All experiments are made with datasets of 12,000 videos of 30 frames (with a frame rate of 20 frames per second). 
For each dataset, we keep 2,000 videos separate to pre-train the renderer, 9,000 videos to train the physics predictor and 1,000 videos for evaluation. Our scene contains a variable number of balls (up to 6) with random initial positions and velocities, bouncing against each other and the walls. Initial positions are sampled from a uniform distribution in the box [1, 200]², all balls lying on the ground. Initial velocities along the x and y axes are sampled in Unif([−25, 25]) units per frame, and the initial velocity along the z-axis is set to 0. The radius of each ball is sampled uniformly in [10, 40]. Scenes also contain a variable number of boxes (up to 2) fixed to the floor, against which balls can collide. Contrary to Battaglia et al. (2016), where the authors set a frame rate of 1000 frames per second, we sample 30 frames per second, which is more reasonable when working with masks (because of the computation cost of mask prediction). Top-view. In the first dataset we record videos with a top camera view, where the borders of the frame coincide with the walls of the box. Here, initial motion is orthogonal to the camera, which makes this dataset very similar to the 2D bouncing balls datasets presented in Battaglia et al. (2016) and Watters et al. (2017). However, our dataset is 3D and, because of collisions and the fact that the balls have different sizes, balls can jump on top of each other, making occlusions possible, even if not frequent. Top-view with Occlusions. To test the ability of our method to learn object dynamics in environments where occlusions occur frequently, we record a second dataset including frequent occlusions. We add an occluder to the scene, which is an object of irregular shape (an airplane), occluding 25% of the frame and moving in 3D between the balls and the camera. This occluder has a rectilinear motion and goes from the bottom to the top of the frame during the whole video sequence. Sample frames and rendered predictions can be found in the supplementary material. Tilted-views. In three additional datasets we keep the same objects and motions but tilt the camera at angles of 45◦, 65◦ and 75◦. Increasing the tilt of the camera results in more severe inter-object occlusions (both partial and complete), where the balls pass in front of each other, and in front of and behind the static boxes, at different distances to the camera. In addition, the ball trajectories become more complex due to increasing perspective effects. In contrast to the top-view experiment, the motion is not orthogonal to the camera plane anymore, and depth becomes crucial to predict the future motion. S3. Training details This section gives details of the offline pre-training of the Compositional Rendering Network and a detailed outline of the algorithm for training the Recurrent Interaction Network. Pre-Training the Compositional Rendering Network. We train the neural renderer to predict mask and depth M̂t, D̂t from a list of objects [px, py, d, c], where px, py are the x-y coordinates of the object in the frame, d is the distance between the object and the camera, and c is a vector of intrinsic object properties containing the size of the object, its class (in our experiments a variable indicating whether the object is a ball, a square or an occluder) and its color as a vector in [0, 1]³. The target mask is a 128 × 128 image where each pixel value indicates the index of the corresponding object mask (0 for the background, i ∈ 1..N for objects).
The loss on the mask is the negative log-likelihood, which corresponds to the average classification loss on each pixel, $L_{\mathrm{mask}}(\hat{M}, M) = -\sum_{i \leq h,\, j \leq w} \sum_{n \leq N} \mathbb{1}(M_{i,j} = n)\, \log(\hat{M}_{i,j,n})$ (1), where the first sum is over individual pixels indexed by i and j, the second sum is over the individual objects indexed by n, $\hat{M} \in [0, 1]^{h \times w \times N}$ are the predicted (soft) object masks, and $M \in [[0, N]]^{h \times w}$ is the scene ground truth mask containing all objects. The target depth map is a 128 × 128 image with values normalized to the [-1, 1] interval during training. The loss on the depth map prediction is the mean squared error $L_{\mathrm{depth}}(\hat{D}, D) = \sum_{i \leq h,\, j \leq w} (\hat{D}_{i,j} - D_{i,j})^2$ (2), where $\hat{D}, D \in \mathbb{R}^{h \times w}$ are the predicted and ground truth depth maps, respectively. The final loss used to train the renderer is the weighted sum of the losses on masks and depth maps, $L = 0.7 \cdot L_{\mathrm{mask}} + 0.3 \cdot L_{\mathrm{depth}}$. We use the Adam optimizer with default parameters, and reduce the learning rate by a factor of 10 each time the loss on the validation set does not decrease during 10 epochs. We pre-train the network on a separate set of 15000 images generated with pybullet and containing similar objects as in our videos. Training details of the Recurrent Interaction Network. The detailed outline of training the Recurrent Interaction Network is given in Algorithm 1.
Algorithm 1: Train Recurrent Interaction Network
Data: T, L: length of the video and prediction span, respectively; Mt, t = 1..T: instance masks; Dt, t = 1..T: depth maps; Rend: pre-trained Renderer; RIN: Recurrent Interaction Network (initialized with constant velocity motion); Criterion: stopping criterion (RIN loss on validation); Detection(mt, dt): returns centroid, depth and size of instance masks; NLL, MSE: negative log-likelihood and mean squared error, respectively.
Result: trajectory estimates p̄t+1..t+L; trained Recurrent Interaction Network wRIN.
while Criterion(RIN) do
    for t ∈ {1..T − 1} do    // initialization of positions and velocities
        p̂t ← Detection(mt, dt)    // initial object positions from observed masks and depths
        p̄t ← arg min_{p←p̂t} NLL(Rend(p), (mt, dt))    // occlusion-aware position refinement using the Renderer
        v̄t ← p̄t+1 − p̄t    // estimate object velocities from consecutive frames
    for t ∈ {1..T − L} do    // training the Recurrent Interaction Network
        p̂t+1..t+L ← RIN(p̄t, v̄t)    // predict the sequence of states (of all objects) using roll-out
        p̄t+1..t+L ← arg min_{p←p̂t+1..t+L} NLL(Rend(p), (mt, dt))    // occlusion-aware position refinement
        wRIN ← arg min_w MSE(RIN(p̄t), p̂t+1..t+L)    // update the weights of the Recurrent Interaction Network
Given an initial state st, the Recurrent Interaction Network recursively predicts a sequence of future states ŝt+1, ŝt+2, ..., ŝt+L, as well as error terms τ̂t+1, τ̂t+2, ..., τ̂t+L. This predicted sequence is compared to object positions (ground truth or derived from masks after refinement), and the loss is computed as the sum of the negative log-likelihood along the sequence.
Model | Top view | Top view + occlusion | 45◦ tilt | 25◦ tilt | 15◦ tilt
CNN autoencoder (Riochet et al., 2018) | 0.0147 | 0.0451 | 0.0125 | 0.0124 | 0.0121
RIN, trained on mask+depth | 0.0101 | 0.0342 | 0.0072 | 0.0070 | 0.0069
Proba-RIN, trained on mask+depth | 0.0100 | 0.0351 | 0.0069 | 0.0071 | 0.0065
Table S1. Aggregate pixel reconstruction error for mask and depth, for a prediction span of two frames. This error is the loss used for training (described in the supplementary material).
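For concreteness, the combined renderer objective above (Equations 1 and 2 with the 0.7/0.3 weighting) can be sketched in a few lines of PyTorch. The function and tensor names below are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def renderer_loss(mask_logits, depth_pred, mask_gt, depth_gt,
                  w_mask=0.7, w_depth=0.3):
    """Weighted sum of the per-pixel mask classification loss (Eq. 1)
    and the depth regression loss (Eq. 2).

    mask_logits: (B, N+1, H, W) unnormalized scores for background + N objects
    depth_pred:  (B, H, W) predicted depth, normalized to [-1, 1]
    mask_gt:     (B, H, W) integer object indices (0 = background)
    depth_gt:    (B, H, W) ground-truth depth, normalized to [-1, 1]
    """
    # Negative log-likelihood over the per-pixel object indices.
    l_mask = F.cross_entropy(mask_logits, mask_gt)
    # Mean squared error on the depth map.
    l_depth = F.mse_loss(depth_pred, depth_gt)
    return w_mask * l_mask + w_depth * l_depth
```

Note that `F.cross_entropy` averages over pixels by default, which matches the "average classification loss on each pixel" reading of Equation 1.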
This training error is a weighted combination of the mask error (per-pixel classification error) and the depth error (mean squared error). We use the Adam optimizer, and divide the learning rate by L to be consistent with the size of the sequence (as the loss is a sum over a sequence of length L). The same learning rate decay and stopping procedure is applied. Sequence lengths of 4, 6 and 10 were tested during training, a length of 10 giving slightly more stable rollouts. S4. Occlusion-aware refinement of object positions Position refinement consists of using the pre-trained Renderer to correct the estimated positions of all objects in a particular frame. To do so, we give the position estimates as input to the Renderer, which outputs a corresponding pair of mask and depth field for the frame, (M̂, D̂), properly rendering the inter-object occlusions. This prediction is compared to the observed mask and depth field, returning errors that are backpropagated through the frozen weights of the Renderer. We perform gradient descent on the input itself to correct the object position and size estimates, according to the observations. In our experiments, we set the learning rate to 0.01 and compute 200 iterations of gradient descent. Details of the loss are given in Section S3. For object positions estimated from object masks, this refinement allows us to reduce errors due to partial occlusions (moving the predicted center of one object from its visible mask centroid to its real center). S5. Future prediction: Comparison with Riochet et al. (2018) We evaluate the error of the mask and depth prediction, measured by the training error described in detail in Section S3. Here, we compare our model to a CNN autoencoder (Riochet et al., 2018), which directly predicts future masks from current ones, without explicitly modelling the dynamics of the individual objects in the scene. Note this baseline is similar to Lerer et al. (2016). Results are shown in Table S1. As before, the existence of external occluders or the presence of tilt degrades the performance, but even in this case, our model remains much better than the CNN autoencoder of Riochet et al. (2018). S6. Detailed roll-out results In Figure S1, we report the proportion of correctly followed objects for different rollout lengths (5, 10 and 30 frames) as a function of the distance error (pixels). Note that the size of the smallest object is around 20 pixels.
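To make the occlusion-aware refinement of Section S4 concrete, here is a minimal PyTorch-style sketch of gradient descent on the renderer inputs with the renderer weights frozen (learning rate 0.01, 200 iterations as stated above). The `renderer` interface and the `renderer_loss` helper sketched earlier are assumptions for illustration, not the authors' implementation.

```python
import torch

def refine_positions(renderer, state_init, mask_obs, depth_obs,
                     lr=0.01, n_steps=200):
    """Correct object states (x, y, depth, size) so that the frozen renderer
    reproduces the observed instance mask and depth map of one frame."""
    state = state_init.clone().detach().requires_grad_(True)
    for p in renderer.parameters():
        p.requires_grad_(False)          # renderer stays fixed; only inputs move
    optimizer = torch.optim.SGD([state], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        mask_logits, depth_pred = renderer(state)
        # Same weighted mask/depth objective used to pre-train the renderer.
        loss = renderer_loss(mask_logits, depth_pred, mask_obs, depth_obs)
        loss.backward()                  # gradients flow back to the input states
        optimizer.step()
    return state.detach()
```

Because the loss is computed on the rendered masks, fully occluded objects contribute (near-)zero gradients, which is exactly the cancellation effect discussed in the conclusion above.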
1. What is the main contribution of the paper regarding object permanence, tracking, and future prediction? 2. What are the strengths of the proposed approach, particularly in combining a recurrent neural network and a rendering network? 3. Do you have any concerns or questions regarding the training process of the model? 4. How does the reviewer assess the difference between the proposed method and prior works, such as Battaglia et al. (2016)? 5. What are the limitations of the proposed approach, especially in comparison with other works like Riochet et al. (2018)? 6. Can you provide more explanations or clarifications regarding some of the terms used in the review, such as "compositional rendering network," "implausibility score," and "maximum error through the whole sequence"?
Review
Review Summary: This paper proposes a method that combines a recurrent neural network that predicts values that are used as inputs to a renderer, which interprets them and generates an object shape map and a depth map for every step of the dynamics predicted by the recurrent neural network. The proposed method is able to handle object occlusions and interactions. In experiments, the authors show improved performance against baselines for future prediction, object tracking, and object permanence. Pros: + Rendering network used with RNN + Outperforms chosen baselines Weaknesses / comments: - Compositional Rendering Network has to be pretrained: Did the authors try to train the model end-to-end? It would be interesting to see if this can be done so the proposed network is more unified. - Figure 3 is not self explanatory: It would be good if the authors add labels to the predicted and gt frames. It is not easy to parse this figure from just looking at it. - Difference from Battaglia et al., 2016: It seems that the only difference between the proposed method and this baseline is the change of inputs/outputs (including output with variance), and training on full sequences (RNN)? This looks like a minor change to me and reduces the novelty of the proposed method. - Table 1 (trained on ground-truth positions): The authors claim that their network performs similarly to Battaglia et al., 2016, but it seems that the baseline is better than the proposed method for the short term predictions with a relative improvement of about 20%, and for long term, when the baseline is better (half of the tests), it's by a relative improvement of about 10%. Can the authors comment on this? Am I missing something? - Implausibility score: What do the authors mean by "the maximum error through the whole sequence"? How is this defined? - The authors compare with Riochet et al., 2018 in Table 4, but not in the rest of the evaluations. Can the authors comment on why this is the case? Conclusion: The paper proposes an interesting method and dataset, and seems to outperform the baselines in the quantitative evaluation. To the best of my knowledge, the current state of the method is novel in the rendering network. However, the rest of the components have limited novelty. In addition, I have some comments about the paper which I stated above.
ICLR
Title The Emergence of Prototypicality: Unsupervised Feature Learning in Hyperbolic Space Abstract Prototypicality is extensively studied in machine learning and computer vision. However, there is still no widely accepted definition of prototypicality. In this paper, we first propose to define prototypicality based on the concept of congealing. Then, we develop a novel method called HACK to automatically discover prototypical examples from the dataset. HACK conducts unsupervised prototypicality learning in Hyperbolic space with sphere pACKing. HACK first generates uniformly packed particles in the Poincaré ball of hyperbolic space and then assigns each image uniquely to a particle. Due to the geometrical property of hyperbolic space, prototypical examples naturally emerge and tend to be located in the center of the Poincaré ball. HACK naturally leverages hyperbolic space to discover prototypical examples in a data-driven fashion. We verify the effectiveness of the method with a synthetic dataset and natural image datasets. Extensive experiments show that HACK can naturally discover the prototypical examples without supervision. The discovered prototypical examples and atypical examples can be used to reduce sample complexity and increase model robustness. 1 INTRODUCTION Not all instances are created equal. Some instances are more representative of the class and some instances are outliers or anomalies. Representative examples can be viewed as prototypes and used for interpretable machine learning (Bien & Tibshirani, 2011), curriculum learning (Bengio et al., 2009) and learning better decision boundaries (Carlini et al., 2018). With prototypical examples, we can also conduct classification with few or even one example (Miller et al., 2000). Given an image dataset, it is thus desirable to organize the examples based on prototypicality. If the features of the images are given, it is relatively easy to find the prototypes by examining the density peaks of the feature distribution. If the features are not given, discovering prototypical examples without supervision is difficult: there is no universal definition or simple metric to assess the prototypicality of the examples. A naive method to address this problem is to examine the gradient magnitude (Carlini et al., 2018). However, this approach is shown to have a high variance, which results from different training setups (Carlini et al., 2018). Some methods address this problem from the perspective of adversarial robustness (Stock & Cisse, 2018; Carlini et al., 2018): prototypical examples should be more adversarially robust. However, the selection of the prototypical examples highly depends on the adversarial method and the metric used in the adversarial attack. Several other methods exist for this problem but they are either based on heuristics or lack a proper justification (Carlini et al., 2018). In this paper, we first introduce a way of obtaining prototypical examples from image congealing (Miller et al., 2000). Congealing is the process of jointly aligning a set of images. The congealed images are transformed to better align with the average image and are thus more typical. We further propose a novel method, called HACK, by leveraging the geometry of hyperbolic space for unsupervised learning. Hyperbolic space is a non-Euclidean space with constant negative curvature (Anderson, 2006). Different from Euclidean space, hyperbolic space can represent hierarchical relations with low distortion.
The Poincaré ball model is one of the most commonly used models for hyperbolic space (Nickel & Kiela, 2017b). One notable property of the Poincaré ball model is that the distance to the origin grows exponentially as we move towards the boundary. Thus, the points located in the center of the ball are close to all the other points, while the points located close to the boundary are infinitely far away from other points. Figure 1: Different from the existing unsupervised learning methods which aim to group examples via semantic similarity, HACK organizes images in hyperbolic space in a hierarchical manner. The typical images are at the center of the Poincaré ball and the atypical images are close to the boundary of the Poincaré ball. With unsupervised learning in hyperbolic space, HACK can learn features which capture both visual similarity and prototypicality (Figure 1). HACK optimizes the organization of the dataset by assigning the images to a set of uniformly distributed particles in hyperbolic space. The assignment is done by minimizing the total hyperbolic distance between the image features and the particles via the Hungarian algorithm. The prototypicality arises naturally based on the distance of the example to other examples. Prototypical examples tend to be located in the center of the Poincaré ball and atypical examples tend to be located close to the boundary. Hyperbolic space readily facilitates such an organization due to the property of the hyperbolic distance. In summary, the contributions of the paper are: • We propose the first unsupervised feature learning method to learn features which capture both visual similarity and prototypicality. The positions of the features reflect the prototypicality of the examples. • The proposed method HACK assigns images to particles that are uniformly packed in hyperbolic space. HACK fully exploits the property of hyperbolic space and prototypicality arises naturally. • We ground the concept of prototypicality based on congealing, which conforms to human visual perception. The congealed examples can be used to replace the original examples for constructing datasets with known prototypicality. We validate the effectiveness of the method by using a synthetic dataset with natural and congealed images. We further apply the proposed method to commonly used image datasets to reveal prototypicality. • The discovered prototypical and atypical examples are shown to reduce sample complexity and increase the robustness of the model. 2 RELATED WORK Prototypicality. The study of prototypical examples in machine learning has a long history. In Zhang (1992), the authors select typical instances based on the fact that typical instances should be representative of the cluster. In Kim et al. (2016), prototypical examples are defined as the examples that have minimum maximum mean discrepancy within the data. Li et al. (2018) propose to discover prototypical examples by architectural modifications: the dataset is first projected onto a low-dimensional manifold and a prototype layer is used to minimize the distance between inputs and the prototypes on the manifold. The robustness to adversarial attacks is also used as a criterion for prototypicality (Stock & Cisse, 2018). In Carlini et al. (2018), the authors propose multiple metrics for prototypicality discovery. For example, the features of prototypical examples should be consistent across different training setups. However, these metrics usually depend heavily on the training setups and hyperparameters used for training.
The idea of prototypicality is also extensively studied in meta-learning for one-shot or few-shot classification (Snell et al., 2017). No existing works address the prototypicality discovery problem in a data-driven fashion. Our proposed HACK naturally exploits hyperbolic space to organize the images based on prototypicality. Unsupervised Learning in Hyperbolic Space. Learning features in hyperbolic space has been shown to be useful for many machine learning problems (Nickel & Kiela, 2017a; Ganea et al., 2018). One useful property is that hierarchical relations can be embedded in hyperbolic space with low distortion (Nickel & Kiela, 2017a). A generalized version of the normal distribution called the wrapped normal distribution is proposed for modeling distributions of points in hyperbolic space (Nagano et al., 2019). The proposed wrapped normal distribution is used as the latent space for constructing hyperbolic variational autoencoders (VAEs) (Kingma & Welling, 2013). Poincaré VAEs are constructed in Mathieu et al. (2019) with a similar idea to Nagano et al. (2019) by replacing the standard normal distribution with a hyperbolic normal distribution. Unsupervised 3D segmentation (Hsu et al., 2020) and instance segmentation (Weng et al., 2021) are conducted in hyperbolic space via a hierarchical hyperbolic triplet loss. CO-SNE (Guo et al., 2021a) is recently proposed to visualize high-dimensional hyperbolic features in a two-dimensional hyperbolic space. Although hyperbolic distance facilitates the learning of hierarchical structure, how to leverage hyperbolic space for unsupervised prototypicality discovery is not explored in the current literature. Sphere Packing. The problem of sphere packing is to pack a set of particles as densely as possible in a space (Conway & Sloane, 2013). Sphere packing can serve as a toy model for granular materials and has applications in information theory (Shannon, 2001) to find error-correcting codes (Cohn, 2016). Sphere packing is difficult due to multiple local minima, the curse of high-dimensionality and complicated geometrical configurations. Packing in hyperbolic space is also studied in the literature. Böröczky (1978) gives a universal upper bound for the density of sphere packing in an n-dimensional hyperbolic space when n ≥ 2. We are interested in generating uniform packing in a two-dimensional hyperbolic space. Uniformity has been shown to be a useful criterion for learning good features on the hypersphere (Wang & Isola, 2020). We opt to find the configuration with an optimization procedure which is easily applicable even with thousands of particles. 3 OVERVIEW Given existing features {f(vi)} which are obtained by applying a feature extractor to each instance vi, we can find the prototypical examples by examining the density peaks via techniques from density estimation. For example, the K-nearest neighbor (K-NN) density estimate (Fix & Hodges, 1989) is defined as $p_{\mathrm{knn}}(v_i, k) = \frac{k}{n} \cdot \frac{1}{A_d \cdot D^d(v_i, v_{k(i)})}$ (1), where d is the feature dimension, $A_d = \pi^{d/2} / \Gamma(d/2 + 1)$, $\Gamma(x)$ is the Gamma function, and $v_{k(i)}$ is the k-th nearest neighbor of example $v_i$. The nearest neighbors can be found by computing the distance between the features. However, different training setups can induce different feature spaces, which in turn lead to different conclusions about prototypicality. Our goal is to learn features that naturally reflect the prototypicality of the examples. We ground our concept of prototypicality based on congealing (Miller et al., 2000).
In particular, we define prototypical examples in the pixel space by examining the distance of the images to the average image in the corresponding class. Our idea is based on a traditional computer vision technique called image alignment (Szeliski et al., 2007), which aims to find correspondences across images. During congealing (Miller et al., 2000), a set of images are transformed to be jointly aligned by minimizing the joint pixelwise entropies. The congealed images are more prototypical: they are better aligned with the average image. Thus, we have a simple way to transform an atypical example into a typical example (see Figure 2). This is useful since, given an unlabeled image dataset, the typicality of the examples is unknown; congealed examples can naturally serve as examples with known typicality and be used to validate the effectiveness of our method. 4 UNSUPERVISED FEATURE REPRESENTATION IN HYPERBOLIC SPACE We aim to develop a method which can automatically discover prototypical examples in an unsupervised manner. In particular, we conduct unsupervised learning in hyperbolic space with sphere packing (Figure 5). We specify where the targets should be located ahead of training with uniform packing in hyperbolic space, which by design spreads them out maximally evenly in hyperbolic space. The uniformly distributed particles guide feature learning to achieve maximum instance discrimination (Wu et al., 2018). HACK figures out which instance should be mapped to which target through bipartite graph matching as a global optimization procedure. During training, HACK minimizes the total hyperbolic distances between the mapped image points (in the feature space) and the targets; those that are more typical naturally emerge closer to the origin of the Poincaré ball. Prototypicality comes for free as a result of self-organization. HACK differs from existing learning methods in several aspects (Figure 3). Different from supervised learning, HACK allows the image to be assigned to any target (particle). This enables exploration of natural organizations of the data. Different from existing unsupervised learning methods, HACK specifies a predefined geometrical organization which encourages the corresponding structure to emerge from the dataset. Existing methods are not applicable for prototypicality discovery without supervision due to their aforementioned limitations. Section 4.1 gives the background on hyperbolic space. Section 4.2 describes the steps for generating uniformly distributed particles in hyperbolic space. Section 4.3 delineates the details of the hyperbolic instance assignment via the Hungarian algorithm. 4.1 POINCARÉ BALL MODEL FOR HYPERBOLIC SPACE Hyperbolic space. Euclidean space has a curvature of zero, whereas a hyperbolic space is a Riemannian manifold with a constant negative curvature. Poincaré Ball Model for Hyperbolic Space. There are several isometrically equivalent models for visualizing hyperbolic space with a Euclidean representation. The Poincaré ball model is the most commonly used one in hyperbolic representation learning (Nickel & Kiela, 2017b). The n-dimensional Poincaré ball model is defined as $(\mathbb{B}^n, g_x)$, where $\mathbb{B}^n = \{x \in \mathbb{R}^n : \|x\| < 1\}$ and $g_x = (\gamma_x)^2 I_n$ is the Riemannian metric tensor, $\gamma_x = \frac{2}{1 - \|x\|^2}$ is the conformal factor, and $I_n$ is the Euclidean metric tensor. Hyperbolic Distance.
Given two points $\mathbf{u} \in \mathbb{B}^n$ and $\mathbf{v} \in \mathbb{B}^n$, the hyperbolic distance is defined as $d_{\mathbb{B}^n}(\mathbf{u}, \mathbf{v}) = \operatorname{arcosh}\left(1 + 2\,\frac{\|\mathbf{u} - \mathbf{v}\|^2}{(1 - \|\mathbf{u}\|^2)(1 - \|\mathbf{v}\|^2)}\right)$ (2), where arcosh is the inverse hyperbolic cosine function and $\|\cdot\|$ is the usual Euclidean norm. Figure 4: The proposed repulsion loss is used to generate uniformly packed particles in hyperbolic space. (a) If the distance between two particles is within $2r_n$, minimizing the repulsion loss pushes the two particles away. (b) The repulsion loss is larger when the two particles become closer. Hyperbolic distance has the unique property that it grows exponentially as we move towards the boundary of the Poincaré ball. In particular, the points on the boundary circle represent points at infinity. Hyperbolic space is naturally suitable for embedding hierarchical structure (Sarkar, 2011; Nickel & Kiela, 2017b) and can be regarded as a continuous representation of trees (Chami et al., 2020). The hyperbolic distance between samples implicitly reflects their hierarchical relation. Thus, by embedding images in hyperbolic space we can naturally organize images based on their semantic similarity and prototypicality. 4.2 SPHERE PACKING IN HYPERBOLIC SPACE Given n particles, our goal is to pack the particles into a two-dimensional hyperbolic space as densely as possible. We derive a simple repulsion loss function to encourage the particles to be equally distant from each other. The loss is derived via the following steps. First, we need to determine the radius of the Poincaré ball used for packing. We use a curvature of 1.0, so the radius of the Poincaré ball is 1.0. The whole Poincaré ball cannot be used for packing since its volume is infinite. We use r < 1 to denote the actual radius used for packing. Thus, our goal is to pack n particles into a compact subspace of the Poincaré ball. Then, the Euclidean radius r is further converted into the hyperbolic radius $r_{\mathbb{B}}$. Let $s = \frac{1}{\sqrt{c}}$, where c is the curvature. The relation between r and $r_{\mathbb{B}}$ is $r_{\mathbb{B}} = s \log\frac{s + r}{s - r}$. Next, the total hyperbolic area $A_{\mathbb{B}}$ of a Poincaré ball of radius $r_{\mathbb{B}}$ can be computed as $A_{\mathbb{B}} = 4\pi s^2 \sinh^2\!\left(\frac{r_{\mathbb{B}}}{2s}\right)$, where sinh is the hyperbolic sine function. Finally, the area per point $A_n$ can be easily computed as $\frac{A_{\mathbb{B}}}{n}$, where n is the total number of particles. Given $A_n$, the radius per point can be computed as $r_n = 2s \sinh^{-1}\!\left(\sqrt{\frac{A_n}{4\pi s^2}}\right)$. We use the following loss to generate uniform packing in hyperbolic space. Given two particles i and j, the repulsion loss V is defined as $V(i, j; k, n, r) = \left\{\frac{1}{\left[2r_n - \max(0,\, 2r_n - d_{\mathbb{B}}(i, j))\right]^k} - \frac{1}{(2r_n)^k}\right\} \cdot C(k)$ (3), where $C(k) = \frac{(2r_n)^{k+1}}{k}$ and k is a hyperparameter. Intuitively, if the particles i and j are within $2r_n$ of each other, the repulsion loss is positive. Minimizing the repulsion loss pushes the particles i and j away from each other. If the repulsion is zero, this indicates all the particles are equally distant (Figure 4 a). Figure 4 b) shows that the repulsion loss grows significantly when the two particles become close. We also adopt the following boundary loss to prevent the particles from escaping the ball, $B(i; r) = \max(0, \mathrm{norm}_i - r + \mathrm{margin})$ (4), where $\mathrm{norm}_i$ is the $\ell_2$ norm of the representation of particle i. Figure 3 b) shows an example of the generated particles that are uniformly packed in hyperbolic space.
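To make the packing step concrete, the following PyTorch-style sketch implements the hyperbolic distance of Equation 2 together with the repulsion loss of Equation 3 and the boundary loss of Equation 4 for 2D particles. The values k = 1.55 and r = 0.76 follow the implementation details in Section 5.3; the margin value and the plain double loop are simplifying assumptions of this sketch, not the authors' code.

```python
import torch

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance in the Poincare ball (Eq. 2), curvature c = 1."""
    sq_dist = torch.sum((u - v) ** 2, dim=-1)
    denom = (1 - torch.sum(u ** 2, dim=-1)) * (1 - torch.sum(v ** 2, dim=-1))
    return torch.acosh(1 + 2 * sq_dist / (denom + eps))

def repulsion_loss(particles, r_n, k=1.55):
    """Pairwise repulsion (Eq. 3): a pair contributes a positive term only
    when the two particles are closer than 2 * r_n."""
    n = particles.shape[0]
    c_k = (2 * r_n) ** (k + 1) / k
    loss = particles.new_zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            d = poincare_distance(particles[i], particles[j])
            clipped = 2 * r_n - torch.clamp(2 * r_n - d, min=0.0)  # = min(d, 2*r_n)
            loss = loss + (1 / clipped ** k - 1 / (2 * r_n) ** k) * c_k
    return loss

def boundary_loss(particles, r=0.76, margin=0.01):
    """Keep particles inside the ball of Euclidean radius r (Eq. 4);
    the margin value here is an assumption."""
    norms = particles.norm(dim=-1)
    return torch.clamp(norms - r + margin, min=0.0).sum()

# Usage sketch: store the (n, 2) particle coordinates as a trainable tensor and
# minimize repulsion_loss + boundary_loss with a standard optimizer until the
# repulsion term reaches zero.
```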
4.3 HYPERBOLIC INSTANCE ASSIGNMENT HACK learns the features by optimizing the assignment of the images to the particles (Figure 5). Once we generate a fixed set of uniformly packed particles in a two-dimensional hyperbolic space, our next goal is to assign each image to a corresponding particle. The assignment should be one-to-one, that is, each image should be assigned to one particle and each particle is allowed to be associated with only one image. We cast the instance assignment problem as a bipartite matching problem (Gibbons, 1985) and solve it with the Hungarian algorithm (Munkres, 1957). Figure 5: HACK conducts unsupervised learning in hyperbolic space with sphere packing. The images are mapped to particles by minimizing the total hyperbolic distance. HACK learns features that can capture both visual similarities and prototypicality. Algorithm 1 HACK: Unsupervised Learning in Hyperbolic Space. Require: number of images n ≥ 0; radius for packing r < 1; an encoder fθ with parameters θ. 1: Generate uniformly distributed particles in hyperbolic space by minimizing the repulsion loss in Equation 3. 2: Given {(x1, s1), (x2, s2), ..., (xb, sb)}, optimize fθ by minimizing the total hyperbolic distance via the Hungarian algorithm. Initially, we randomly assign the particles to the images; thus there is a random one-to-one correspondence between the images and the particles (not yet optimized). Given a batch of samples {(x1, s1), (x2, s2), ..., (xb, sb)}, where xi is an image and si is the corresponding particle, and an encoder fθ, we generate the hyperbolic feature for each image xi as fθ(xi) ∈ B2, where B2 is a two-dimensional Poincaré ball. We aim to find the minimum cost bipartite matching of the images to the particles within this batch. It is worth noting that no labels are needed and the assignment is done without supervision. In the bipartite matching, the cost is the hyperbolic distance of each image to the particle. Thus, the criterion is to minimize the total hyperbolic distance of the assignment. We achieve this goal with the Hungarian algorithm (Munkres, 1957), which has a complexity of O(b³), where b is the batch size. It is worth noting that the assignment is only limited to the samples in the particular batch, thus the time and memory complexity is tolerable. The one-to-one correspondence between the images and particles is always maintained during training. The details of HACK are shown in Algorithm 1. Due to the property of the hyperbolic distance, the images that are more typical tend to be assigned to the particles located in the center of the Poincaré ball. Thus, HACK implicitly defines prototypicality as the distance of the sample to all the other samples. The prototypicality of the images can be easily reflected by the location of the assigned particles. Moreover, similar images tend to cluster together due to semantic similarity. In summary, with hyperbolic instance assignment, HACK automatically organizes images based on prototypicality by exploiting the hyperbolicity of the space.
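A minimal sketch of the per-batch assignment step of Algorithm 1, reusing `poincare_distance` from the packing sketch above and SciPy's Hungarian solver. The encoder, the exponential map, and the particle bookkeeping in the usage comments are placeholders rather than the authors' implementation.

```python
import torch
from scipy.optimize import linear_sum_assignment

def assign_images_to_particles(features, particles):
    """Minimum-cost bipartite matching of a batch of hyperbolic image features
    to the particles owned by this batch (Hungarian algorithm, O(b^3)).

    features:  (b, 2) image embeddings in the Poincare ball
    particles: (b, 2) particle coordinates
    Returns col_ind such that image i is matched to particles[col_ind[i]].
    """
    with torch.no_grad():
        cost = torch.stack([poincare_distance(f.unsqueeze(0), particles)
                            for f in features])    # (b, b) hyperbolic distances
    row_ind, col_ind = linear_sum_assignment(cost.cpu().numpy())
    return col_ind                                  # row_ind is simply 0..b-1

# Training step sketch:
#   feats = exp_map(encoder(images))                # project onto the Poincare ball
#   perm = assign_images_to_particles(feats, batch_particles)
#   loss = poincare_distance(feats, batch_particles[perm]).mean()
```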
Why Does HACK Work? Hyperbolic space can embed tree structures with low distortion. In particular, the root of the tree can be embedded in the center of the Poincaré ball and the leaves are embedded close to the boundary. Thus, the root is close to all the other nodes. This agrees with our intuition that typical examples should be close to all other examples. By minimizing the total assignment loss of the images to the particles, we seek to organize the images implicitly in a tree-structured manner. Consider three images A, B, C as an example. Assume image A is the most typical image. Thus the feature of A is close to both the features of B and C. The bipartite matching tends to assign image A to the particle in the center since this naturally reflects the feature distances between the three images. Connection to Existing Methods. Existing works address the problem of prototypicality discovery with ad-hoc defined metrics (Carlini et al., 2018). These metrics usually have high variance due to different training setups or hyperparameters. In this paper, we take a different perspective by exploiting the natural organization of the data through optimizing the hyperbolic instance assignment. The property of hyperbolic space facilitates the discovery of prototypicality. Also, popular contrastive learning based unsupervised learning methods such as SimCLR (Chen et al., 2020) and MoCo (He et al., 2020) cannot achieve this goal since the predefined structure is not specified. 5 EXPERIMENTS We design several experiments to show the effectiveness of HACK for semantic and prototypical organization. First, we construct a dataset with known prototypicality using the congealing algorithm (Miller et al., 2000). Then, we apply HACK to datasets with unknown prototypicality to organize the samples based on the semantic and prototypical structure. Finally, we show that the prototypical structure can be used to reduce sample complexity and increase model robustness. 5.1 DATASETS We first construct a dataset called Congealed MNIST. To verify the efficacy of HACK for unsupervised prototypicality discovery, we need a benchmark with known prototypical examples. However, currently there is no standard benchmark for this purpose. To construct the benchmark, we use the congealing algorithm from Miller et al. (2000) to align the images in each class of MNIST (LeCun, 1998). The congealing algorithm was initially used for one-shot classification. During congealing, the images are brought into correspondence with each other jointly. The congealed images are more prototypical: they are better aligned with the average image. In Figure 2, we show the original images and the images after congealing. The original images are transformed via affine transformations to better align with each other. The synthetic data is generated by replacing 500 original images with the corresponding congealed images. In Section E of the Appendix, we show the results of changing the number of replaced original images. We expect HACK to discover the congealed images and place them in the center of the Poincaré ball. We also aim to discover the prototypical examples from each class of the standard MNIST dataset (LeCun, 1998) and CIFAR10 (Krizhevsky et al., 2009). CIFAR10 consists of 60,000 images from 10 object categories ranging from airplane to truck. CIFAR10 is more challenging than MNIST since it has larger intra-class variations. 5.2 BASELINES We consider several existing metrics proposed in Carlini et al. (2018) for prototypicality discovery; the details can be found in Section C of the Appendix. Holdout Retraining: We consider the Holdout Retraining proposed in Carlini et al. (2018). The idea is that the features of a prototypical example obtained from models trained on different datasets should be close. Model Confidence: Intuitively, the model should be confident on prototypical examples. Thus, it is natural to use the confidence of the model prediction as the criterion for prototypicality. 5.3 IMPLEMENTATION DETAILS We implement HACK in PyTorch and the code will be made public. To generate the uniform particles, we first randomly initialize the particles.
We run the training for 1000 epochs to minimize the repulsion loss and boundary loss. The learning rate is 0.01. The curvature of the Poincaré ball is 1.0 and r is set to 0.76 to alleviate numerical issues (Guo et al., 2021b). The hyperparameter k is 1.55, which is shown to generate uniform particles well. For the assignment, we use a LeNet (LeCun et al., 1998) for MNIST and a ResNet20 (He et al., 2016) for CIFAR10 as the encoder. We apply HACK to each class separately. We attach a fully connected layer to project the feature into a two-dimensional Euclidean space. The image features are further projected onto hyperbolic space via an exponential map. We run the training for 200 epochs and the initial learning rate is 0.1. We use a cosine learning rate scheduler (Loshchilov & Hutter, 2016). We optimize the assignment every other epoch. All the experiments are run on an NVIDIA TITAN RTX GPU. 5.4 PROTOTYPICALITY DISCOVERY ON CONGEALED MNIST Figure 6 shows that HACK can discover the congealed images from all the images. In Figure 6 a), the red particles denote the congealed images and the cyan particles denote the original images. We can observe that the congealed images are assigned to the particles located in the center of the Poincaré ball. This verifies that HACK can indeed discover prototypical examples from the original dataset. Section G.1 in the Appendix shows that during training the features of atypical examples gradually move to the boundary of the Poincaré ball. In Figure 6 b), we show the actual images that are embedded in the two-dimensional hyperbolic space. We can observe that the images in the center of the Poincaré ball are more prototypical and images close to the boundary are more atypical. Also, the images are naturally organized by their semantic similarity. Figure 7 shows that the features of the original images become closer to the center of the Poincaré ball after congealing. In summary, HACK can discover prototypicality and also organizes the images based on their semantics. To the best of our knowledge, this is the first unsupervised learning method that can be used to discover prototypical examples in a data-driven fashion. 5.5 RESULTS ON STANDARD BENCHMARKS Figure 8 shows the embedding of class 0 from MNIST and the class “airplane” from CIFAR10 in hyperbolic space. We sample 2000 images from MNIST and CIFAR10 for better visualization. We also show the arrangement of the images angularly at different angles. Radially, we can observe that images are arranged based on prototypicality. The prototypical images tend to be located in the center of the Poincaré ball. Especially for CIFAR10, the images become blurry and even unrecognizable as we move towards the boundary of the ball. Angularly, the images are arranged based on visual similarity. The visual similarity of images has a smooth transition as we move around angularly. Please see Section D for more results. Comparison with Baselines Figure 11 shows the comparison of the baselines with HACK. We can observe that both HACK and Model Confidence (MC) can discover typical and atypical images. Compared with MC, HACK defines prototypicality as the distance of the sample to other samples, which is more aligned with human intuition. Moreover, in addition to prototypicality, HACK can also be used to organize examples by semantic similarity. Holdout Retraining (HR) is not effective for prototypicality discovery due to the randomness of model training. 5.6 APPLICATION OF PROTOTYPICALITY Reducing Sample Complexity.
The proposed HACK can discover prototypical images as well as atypical images. We show that with atypical images we can reduce the sample complexity for training the model. Prototypical images are representative of the dataset but lack variations. Atypical examples contain more variations, and it is intuitive that models trained on atypical examples should generalize better to the test samples. To verify this hypothesis, we select a subset of samples based on the norm of the features, which indicates the prototypicality of the examples. We consider using both the most typical and the most atypical examples for training the model. We train a LeNet on MNIST for 10 epochs with a learning rate of 0.1. Figure 9 a) shows that training with atypical images can achieve much higher accuracy than training with typical images. In particular, training with the most atypical 10% of the images achieves 16.54% higher accuracy than with the most typical 10% of the images. Thus, HACK provides an easy solution to reduce sample complexity. The results further verify that HACK can distinguish between prototypical and atypical examples. Increasing Model Robustness. Training models with atypical examples can lead to models that are vulnerable to adversarial attacks (Liu et al., 2018; Carlini et al., 2018). Intuitively, atypical examples lead to a less smooth decision boundary, and a small perturbation to the example is likely to change the prediction. With HACK, we can easily identify atypical samples to improve the robustness of the model. We use MNIST as the benchmark and use FGSM (Goodfellow et al., 2014) to attack the model with ϵ = 0.07. We identify the atypical examples with HACK and remove the most atypical X% of the examples. Figure 9 b) shows that discarding atypical examples greatly improves the robustness of the model: the adversarial accuracy is improved from 84.72% to 93.42% by discarding the most atypical 1% of the examples. It is worth noting that the clean accuracy remains the same after removing a small number of atypical examples. 6 SUMMARY We propose an unsupervised learning method, called HACK, for organizing images with sphere packing in hyperbolic space. HACK optimizes the assignments of the images to a fixed set of uniformly distributed particles. Prototypical and semantic structures emerge naturally due to the property of hyperbolic distance. We apply HACK to synthetic data with known prototypicality and standard image datasets. The discovered prototypical and atypical examples can be used to reduce sample complexity and increase model robustness. A APPENDIX B MORE DETAILS ON HYPERBOLIC INSTANCE ASSIGNMENT A more detailed description of the hyperbolic instance assignment is given. Initially, we randomly assign the particles to the images. Given a batch of samples {(x1, s1), (x2, s2), ..., (xb, sb)}, where xi is an image and si is the corresponding particle, and an encoder fθ, we generate the hyperbolic feature for each image xi as fθ(xi) ∈ B2, where B2 is a two-dimensional Poincaré ball. We aim to find the minimum cost bipartite matching of the images to the particles. The cost to minimize is the total hyperbolic distance of the hyperbolic features to the particles. We first compute all the pairwise distances between the hyperbolic features and the particles. This is the cost matrix of the bipartite graph. Then we use the Hungarian algorithm to optimize the assignment (Figure 12). Suppose we train the encoder fθ for T epochs.
We run the hyperbolic instance assignment every other epoch to avoid instability during training. We optimize the encoder fθ to minimize the hyperbolic distance of the hyperbolic feature to the assigned particle in each batch. C DETAILS OF BASELINES Holdout Retraining: We consider the Holdout Retraining proposed in Carlini et al. (2018). The idea is that the features of a prototypical example obtained from models trained on different datasets should be close. In Holdout Retraining, multiple models are trained on the same dataset. The distances of the features of the images obtained from different models are computed and ranked. The prototypical examples are those examples with the closest feature distances. Model Confidence: Intuitively, the model should be confident on prototypical examples. Thus, it is natural to use the confidence of the model prediction as the criterion for prototypicality. Once we train a model on the dataset, we use the confidence of the model to rank the examples. The prototypical examples are those examples that the model is most confident about. D MORE RESULTS ON PROTOTYPICALITY DISCOVERY We show the visualization of all the images in Figure 17 and Figure 18. The images are organized naturally based on their prototypicality and semantic similarity. We further conduct retrieval based on the norm of the hyperbolic features to extract the most typical and atypical images on CIFAR10 in Figure 19. The hyperbolic features with large norms correspond to atypical images and the hyperbolic features with small norms correspond to typical images. It can be observed that the objects in the atypical images are not visible. E GRADUALLY ADDING MORE CONGEALED IMAGES We gradually increase the number of original images replaced by congealed images from 100 to 500. Still, as shown in Figure 13, HACK can learn representations that capture the concept of prototypicality regardless of the number of congealed images. This again confirms the effectiveness of HACK for discovering prototypicality. F DIFFERENT RANDOM SEEDS We further run the assignment 5 times with different random seeds. The results are shown in Figure 14. We observe that the algorithm does not suffer from high variance and the congealed images are always assigned to the particles in the center of the Poincaré ball. This further confirms the efficacy of the proposed method for discovering prototypicality. G EMERGENCE OF PROTOTYPICALITY IN THE FEATURE SPACE Existing unsupervised learning methods mainly focus on learning features for differentiating different classes or samples (Wu et al., 2018; He et al., 2020; Chen et al., 2020). The learned representations are transferred to various downstream tasks such as segmentation and detection. In contrast, the features learned by HACK aim at capturing prototypicality within a single class. To investigate the effectiveness of HACK for revealing prototypicality, we can include or exclude congealed images in the training process. When the congealed images are included in the training process, we expect the congealed images to be located in the center of the Poincaré ball and the original images to be located near the boundary of the Poincaré ball. When the congealed images are excluded from the training process, we expect the features of the congealed images produced by the trained network to be located in the center of the Poincaré ball. G.1 TRAINING WITH CONGEALED IMAGES AND ORIGINAL IMAGES We follow the same setups as in Section 4.3.1 of the main text.
Figure 15 shows the hyperbolic features of the congealed images and original images at different training epochs. The features of the congealed images stay in the center of the Poincaré ball while the features of the original images gradually expand to the boundary. G.2 TRAINING ONLY WITH ORIGINAL IMAGES Figure 16 shows the hyperbolic features of the congealed images when the model is trained only with original images. As we have shown before, congealed images are naturally more typical than their corresponding original images since they are aligned with the average image. The features of the congealed images are all located close to the center of the Poincaré ball. This demonstrates that prototypicality naturally emerges in the feature space. Without using congealed images during training, we exclude any artifacts and further confirm the effectiveness of HACK for discovering prototypicality. We also observe that the features produced by HACK capture the fine-grained similarities among the congealed images despite the fact that all the images are aligned with the average image. H DISCUSSIONS ON SOCIETAL IMPACT AND LIMITATIONS. We address the problem of unsupervised learning in hyperbolic space. We believe the proposed HACK should not raise any ethical considerations. We discuss current limitations below. Applying to the Whole Dataset Currently, HACK is applied to each class separately. Thus, it would be interesting to apply HACK to all the classes at once without supervision. This is much more challenging since we need to differentiate between examples from different classes as well as the prototypical and semantic structure. Exploring other Geometrical Structures We consider uniform packing in hyperbolic space to organize the images. It is also possible to extend HACK by specifying other geometrical structures to encourage the corresponding organization to emerge from the dataset.
1. What is the main contribution of the paper regarding feature learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its use of hyperbolic space and prototype representation? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors provide the first unsupervised strategy for learning features that simultaneously capture visual similarity and prototypicality. This research also found that the discovered prototypical and atypical instances can reduce sample complexity and boost the model's robustness. Strengths And Weaknesses It is very interesting to use the properties of hyperbolic space to mine prototypical examples. The point of this study is that prototypical examples should be located close to the origin or center. (1) But this view is questionable. In [1], the prototype is located on the ideal boundary of the Poincaré ball. (2) Beyond that, "Prototypicality comes for free as a result of self-organization" is not very intuitive, and there is no clear explanation. (3) "Intuitively, the model should be confident on prototypical examples. Thus, it is natural to use the confidence of the model prediction as the criterion for prototypicality." However, the model is more confident when examples are far away from the center, according to previous studies [2][3]. [1] Ghadimi Atigh, Mina, Martin Keller-Ressel, and Pascal Mettes. "Hyperbolic busemann learning with ideal prototypes." Advances in Neural Information Processing Systems 34 (2021): 103-115. [2] Atigh, Mina Ghadimi, et al. "Hyperbolic Image Segmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [3] Khrulkov, Valentin, et al. "Hyperbolic image embeddings." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. Clarity, Quality, Novelty And Reproducibility The paper is easy to follow.
ICLR
Title The Emergence of Prototypicality: Unsupervised Feature Learning in Hyperbolic Space Abstract Prototypicality is extensively studied in machine learning and computer vision. However, there is still no widely accepted definition of prototypicality. In this paper, we first propose to define prototypicality based on the concept of congealing. Then, we develop a novel method called HACK to automatically discover prototypical examples from the dataset. HACK conducts unsupervised prototypicality learning in Hyperbolic space with sphere pACKing. HACK first generates uniformly packed particles in the Poincaré ball of hyperbolic space and then assigns each image uniquely to a particle. Due to the geometrical property of hyperbolic space, prototypical examples naturally emerge and tend to be located in the center of the Poincaré ball. HACK naturally leverages hyperbolic space to discover prototypical examples in a data-driven fashion. We verify the effectiveness of the method with a synthetic dataset and natural image datasets. Extensive experiments show that HACK can naturally discover the prototypical examples without supervision. The discovered prototypical examples and atypical examples can be used to reduce sample complexity and increase model robustness. 1 INTRODUCTION Not all instances are created equal. Some instances are more representative of the class and some instances are outliers or anomalies. Representative examples can be viewed as prototypes and used for interpretable machine learning (Bien & Tibshirani, 2011), curriculum learning (Bengio et al., 2009) and learning better decision boundaries (Carlini et al., 2018). With prototypical examples, we can also conduct classification with few or even one example (Miller et al., 2000). Given an image dataset, it is thus desirable to organize the examples based on prototypicality. If the features of the images are given, it is relatively easy to find the prototypes by examining the density peaks of the feature distribution. If the features are not given, discovering prototypical examples without supervision is difficult: there is no universal definition or simple metric to assess the prototypicality of the examples. A naive method to address this problem is to examine the gradient magnitude (Carlini et al., 2018). However, this approach is shown to have a high variance, which results from different training setups (Carlini et al., 2018). Some methods address this problem from the perspective of adversarial robustness (Stock & Cisse, 2018; Carlini et al., 2018): prototypical examples should be more adversarially robust. However, the selection of the prototypical examples highly depends on the adversarial method and the metric used in the adversarial attack. Several other methods exist for this problem but they are either based on heuristics or lack a proper justification (Carlini et al., 2018). In this paper, we first introduce a way of obtaining prototypical examples from image congealing (Miller et al., 2000). Congealing is the process of jointly aligning a set of images. The congealed images are transformed to better align with the average image and are thus more typical. We further propose a novel method, called HACK, by leveraging the geometry of hyperbolic space for unsupervised learning. Hyperbolic space is a non-Euclidean space with constant negative curvature (Anderson, 2006). Different from Euclidean space, hyperbolic space can represent hierarchical relations with low distortion.
The Poincaré ball model is one of the most commonly used models for hyperbolic space (Nickel & Kiela, 2017b). One notable property of the Poincaré ball model is that the distance to the origin grows exponentially as we move towards the boundary. Thus, the points located in the center of the ball are close to all the other points, while the points located close to the boundary are infinitely far away from other points. Figure 1: Different from the existing unsupervised learning methods which aim to group examples via semantic similarity, HACK organizes images in hyperbolic space in a hierarchical manner. The typical images are at the center of the Poincaré ball and the atypical images are close to the boundary of the Poincaré ball. With unsupervised learning in hyperbolic space, HACK can learn features which capture both visual similarity and prototypicality (Figure 1). HACK optimizes the organization of the dataset by assigning the images to a set of uniformly distributed particles in hyperbolic space. The assignment is done by minimizing the total hyperbolic distance between the image features and the particles via the Hungarian algorithm. The prototypicality arises naturally based on the distance of the example to other examples. Prototypical examples tend to be located in the center of the Poincaré ball and atypical examples tend to be located close to the boundary. Hyperbolic space readily facilitates such an organization due to the property of the hyperbolic distance. In summary, the contributions of the paper are: • We propose the first unsupervised feature learning method to learn features which capture both visual similarity and prototypicality. The positions of the features reflect the prototypicality of the examples. • The proposed method HACK assigns images to particles that are uniformly packed in hyperbolic space. HACK fully exploits the property of hyperbolic space and prototypicality arises naturally. • We ground the concept of prototypicality based on congealing, which conforms to human visual perception. The congealed examples can be used to replace the original examples for constructing datasets with known prototypicality. We validate the effectiveness of the method by using a synthetic dataset with natural and congealed images. We further apply the proposed method to commonly used image datasets to reveal prototypicality. • The discovered prototypical and atypical examples are shown to reduce sample complexity and increase the robustness of the model. 2 RELATED WORK Prototypicality. The study of prototypical examples in machine learning has a long history. In Zhang (1992), the authors select typical instances based on the fact that typical instances should be representative of the cluster. In Kim et al. (2016), prototypical examples are defined as the examples that have minimum maximum mean discrepancy within the data. Li et al. (2018) propose to discover prototypical examples by architectural modifications: the dataset is first projected onto a low-dimensional manifold and a prototype layer is used to minimize the distance between inputs and the prototypes on the manifold. The robustness to adversarial attacks is also used as a criterion for prototypicality (Stock & Cisse, 2018). In Carlini et al. (2018), the authors propose multiple metrics for prototypicality discovery. For example, the features of prototypical examples should be consistent across different training setups. However, these metrics usually depend heavily on the training setups and hyperparameters used for training.
The idea of prototypicality is also extensively studied in meta-learning for one-shot or few-shot classification (Snell et al., 2017). However, no existing work addresses the prototypicality discovery problem in a data-driven fashion. Our proposed HACK naturally exploits hyperbolic space to organize the images based on prototypicality.

Unsupervised Learning in Hyperbolic Space. Learning features in hyperbolic space has been shown to be useful for many machine learning problems (Nickel & Kiela, 2017a; Ganea et al., 2018). One useful property is that hierarchical relations can be embedded in hyperbolic space with low distortion (Nickel & Kiela, 2017a). A generalized version of the normal distribution, called the wrapped normal distribution, has been proposed for modeling distributions of points in hyperbolic space (Nagano et al., 2019); it is used as the latent distribution for constructing hyperbolic variational autoencoders (VAEs) (Kingma & Welling, 2013). A Poincaré VAE is constructed in Mathieu et al. (2019) with an idea similar to Nagano et al. (2019), replacing the standard normal distribution with a hyperbolic normal distribution. Unsupervised 3D segmentation (Hsu et al., 2020) and instance segmentation (Weng et al., 2021) have been conducted in hyperbolic space via a hierarchical hyperbolic triplet loss. CO-SNE (Guo et al., 2021a) was recently proposed to visualize high-dimensional hyperbolic features in a two-dimensional hyperbolic space. Although hyperbolic distance facilitates the learning of hierarchical structure, how to leverage hyperbolic space for unsupervised prototypicality discovery has not been explored in the current literature.

Sphere Packing. The problem of sphere packing is to pack a set of particles as densely as possible in a space (Conway & Sloane, 2013). Sphere packing can serve as a toy model for granular materials and has applications in information theory (Shannon, 2001) for finding error-correcting codes (Cohn, 2016). Sphere packing is difficult due to multiple local minima, the curse of high dimensionality, and complicated geometrical configurations. Packing in hyperbolic space has also been studied: Böröczky (1978) gives a universal upper bound on the density of sphere packings in n-dimensional hyperbolic space for n ≥ 2. We are interested in generating uniform packings in a two-dimensional hyperbolic space. Uniformity has been shown to be a useful criterion for learning good features on the hypersphere (Wang & Isola, 2020). We opt to find the configuration with an optimization procedure, which is easily applicable even with thousands of particles.

3 OVERVIEW

Given existing features {f(vi)}, obtained by applying a feature extractor to each instance vi, we can find the prototypical examples by examining the density peaks via techniques from density estimation. For example, the K-nearest neighbor (K-NN) density estimate (Fix & Hodges, 1989) is defined as

$$p_{\mathrm{knn}}(v_i, k) = \frac{k}{n} \cdot \frac{1}{A_d \cdot D^d(v_i, v_{k(i)})} \qquad (1)$$

where d is the feature dimension, $A_d = \pi^{d/2}/\Gamma(d/2+1)$, $\Gamma(\cdot)$ is the Gamma function, $v_{k(i)}$ is the k-th nearest neighbor of example $v_i$, and $D(v_i, v_{k(i)})$ is the distance between them. The nearest neighbors can be found by computing the distances between the features; a minimal sketch of this estimator is given below. However, different training setups can induce different feature spaces, which in turn leads to different conclusions about prototypicality. Our goal is instead to learn features that naturally reflect the prototypicality of the examples.
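As a concrete illustration of Eq. 1, the following minimal sketch computes the K-NN density estimate for a matrix of pre-extracted features. It assumes Euclidean distances, and the function name `knn_density` is purely illustrative rather than part of any released code.

```python
import numpy as np
from math import gamma, pi

def knn_density(features: np.ndarray, k: int) -> np.ndarray:
    """K-NN density estimate (Eq. 1) for each row of an (n, d) feature matrix."""
    n, d = features.shape
    A_d = pi ** (d / 2) / gamma(d / 2 + 1)                 # volume constant of the unit d-ball
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                        # exclude self-distances
    r_k = np.sort(dists, axis=1)[:, k - 1]                 # distance to the k-th nearest neighbor
    return (k / n) / (A_d * r_k ** d)                      # larger value = denser region
```

Examples with the largest density values would then be read off as prototype candidates, which is exactly the step that becomes unreliable when the feature space itself changes across training setups.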
We ground our concept of prototypicality in congealing (Miller et al., 2000). In particular, we define prototypical examples in pixel space by examining the distance of the images to the average image of the corresponding class. Our idea is based on a traditional computer vision technique called image alignment (Szeliski et al., 2007), which aims to find correspondences across images. During congealing (Miller et al., 2000), a set of images is transformed to be jointly aligned by minimizing the joint pixelwise entropies. The congealed images are more prototypical: they are better aligned with the average image. Thus, we have a simple way to transform an atypical example into a typical example (see Figure 2). This is useful because, given an unlabeled image dataset, the typicality of the examples is unknown; congealed examples naturally serve as examples with known typicality and can be used to validate the effectiveness of our method.

4 UNSUPERVISED FEATURE REPRESENTATION IN HYPERBOLIC SPACE

We aim to develop a method which can automatically discover prototypical examples without supervision. In particular, we conduct unsupervised learning in hyperbolic space with sphere packing (Figure 5). We specify ahead of training where the targets should be located via uniform packing in hyperbolic space, so that the targets are by design spread out as evenly as possible. The uniformly distributed particles guide feature learning to achieve maximum instance discrimination (Wu et al., 2018). HACK figures out which instance should be mapped to which target through bipartite graph matching as a global optimization procedure. During training, HACK minimizes the total hyperbolic distance between the mapped image points (in the feature space) and their targets; images that are more typical naturally emerge closer to the origin of the Poincaré ball. Prototypicality comes for free as a result of self-organization. HACK differs from existing learning methods in several aspects (Figure 3). Different from supervised learning, HACK allows an image to be assigned to any target (particle). This enables the exploration of natural organizations of the data. Different from existing unsupervised learning methods, HACK specifies a predefined geometrical organization which encourages the corresponding structure to emerge from the dataset. Existing methods are not applicable to prototypicality discovery without supervision due to their aforementioned limitations. Section 4.1 gives the background on hyperbolic space. Section 4.2 describes the steps for generating uniformly distributed particles in hyperbolic space. Section 4.3 delineates the details of the hyperbolic instance assignment via the Hungarian algorithm.

4.1 POINCARÉ BALL MODEL FOR HYPERBOLIC SPACE

Hyperbolic space. Euclidean space has zero curvature, whereas hyperbolic space is a Riemannian manifold with constant negative curvature.

Poincaré Ball Model for Hyperbolic Space. There are several isometrically equivalent models for representing hyperbolic space with Euclidean coordinates. The Poincaré ball model is the one most commonly used in hyperbolic representation learning (Nickel & Kiela, 2017b). The n-dimensional Poincaré ball model is defined as $(\mathbb{B}^n, g_x)$, where $\mathbb{B}^n = \{x \in \mathbb{R}^n : \|x\| < 1\}$ and $g_x = (\gamma_x)^2 I_n$ is the Riemannian metric tensor, with conformal factor $\gamma_x = \frac{2}{1 - \|x\|^2}$ and Euclidean metric tensor $I_n$.
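In the implementation described later (Section 5.3), the encoder outputs Euclidean features that are projected onto this ball via an exponential map at the origin. The following is a minimal sketch of that projection for curvature c; the name `expmap0` is illustrative, and libraries such as geoopt provide an equivalent, numerically hardened operation.

```python
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-15) -> torch.Tensor:
    """Exponential map at the origin of the Poincaré ball with curvature c.

    Maps Euclidean (tangent-space) vectors v onto the open ball of radius 1/sqrt(c),
    so that encoder outputs can be treated as points in hyperbolic space.
    """
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)
```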
Hyperbolic Distance. Given two points $u, v \in \mathbb{B}^n$, the hyperbolic distance is defined as

$$d_{\mathbb{B}^n}(u, v) = \operatorname{arcosh}\left(1 + 2\,\frac{\|u - v\|^2}{(1 - \|u\|^2)(1 - \|v\|^2)}\right) \qquad (2)$$

where arcosh is the inverse hyperbolic cosine function and $\|\cdot\|$ is the usual Euclidean norm. Hyperbolic distance has the unique property that it grows exponentially as we move towards the boundary of the Poincaré ball. In particular, points on the boundary circle represent points at infinity. Hyperbolic space is naturally suitable for embedding hierarchical structure (Sarkar, 2011; Nickel & Kiela, 2017b) and can be regarded as a continuous representation of trees (Chami et al., 2020). The hyperbolic distance between samples implicitly reflects their hierarchical relation. Thus, by embedding images in hyperbolic space we can naturally organize them based on their semantic similarity and prototypicality.

Figure 4: The proposed repulsion loss is used to generate uniformly packed particles in hyperbolic space. (a) If the distance between two particles is within $2r_n$, minimizing the repulsion loss pushes the two particles apart. (b) The repulsion loss grows larger as the two particles become closer.

4.2 SPHERE PACKING IN HYPERBOLIC SPACE

Given n particles, our goal is to pack the particles into a two-dimensional hyperbolic space as densely as possible. We derive a simple repulsion loss function to encourage the particles to be equally distant from each other. The loss is derived via the following steps. First, we need to determine the radius of the Poincaré ball used for packing. We use a curvature of 1.0, so the radius of the Poincaré ball is 1.0. The whole Poincaré ball cannot be used for packing since its volume is infinite; we use $r < 1$ to denote the actual Euclidean radius used for packing. Thus, our goal is to pack n particles into a compact subspace of the Poincaré ball. The Euclidean radius $r$ is then converted into a hyperbolic radius $r_{\mathbb{B}}$. Let $s = 1/\sqrt{c}$, where $c$ is the curvature; the relation between $r$ and $r_{\mathbb{B}}$ is $r_{\mathbb{B}} = s \log\frac{s+r}{s-r}$. Next, the total hyperbolic area $A_{\mathbb{B}}$ of a Poincaré disk of radius $r_{\mathbb{B}}$ is $A_{\mathbb{B}} = 4\pi s^2 \sinh^2\!\big(\frac{r_{\mathbb{B}}}{2s}\big)$, where sinh is the hyperbolic sine function. The area per particle is then $A_n = A_{\mathbb{B}}/n$, where n is the total number of particles, and the radius per particle is $r_n = 2s \sinh^{-1}\!\big(\sqrt{\frac{A_n}{4\pi s^2}}\big)$. We use the following loss to generate a uniform packing in hyperbolic space. Given two particles i and j, the repulsion loss V is defined as

$$V(i, j; k, n, r) = \left\{\frac{1}{\big[2r_n - \max(0,\, 2r_n - d_{\mathbb{B}}(i, j))\big]^{k}} - \frac{1}{(2r_n)^{k}}\right\} \cdot C(k) \qquad (3)$$

where $C(k) = \frac{(2r_n)^{k+1}}{k}$ and $k$ is a hyperparameter. Intuitively, if particles i and j are within $2r_n$ of each other, the repulsion loss is positive, and minimizing it pushes the two particles apart. If the repulsion loss is zero, all particle pairs are at least $2r_n$ apart (Figure 4a). Figure 4b shows that the repulsion loss grows sharply as two particles approach each other. We also adopt the following boundary loss to prevent the particles from escaping the ball,

$$B(i; r) = \max(0,\, \|x_i\| - r + \mathrm{margin}) \qquad (4)$$

where $\|x_i\|$ is the $\ell_2$ norm of the representation of particle i. Figure 3b shows an example of the generated particles that are uniformly packed in hyperbolic space. A minimal sketch of the packing step is given below.
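The following sketch implements the quantities above (hyperbolic distance, per-particle radius, repulsion and boundary losses) for the unit-curvature case. Function names, the small epsilons, and the default margin are our own illustrative additions rather than part of the paper.

```python
import math
import torch

def poincare_dist(u: torch.Tensor, v: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Hyperbolic distance (Eq. 2) between points on the unit-curvature Poincaré ball."""
    sq = ((u - v) ** 2).sum(-1)
    denom = ((1 - (u ** 2).sum(-1)) * (1 - (v ** 2).sum(-1))).clamp_min(eps)
    return torch.acosh(1 + 2 * sq / denom)

def per_particle_radius(n: int, r: float, c: float = 1.0) -> float:
    """Hyperbolic radius r_n allotted to each of n particles packed inside Euclidean radius r."""
    s = 1.0 / math.sqrt(c)
    r_B = s * math.log((s + r) / (s - r))                        # Euclidean -> hyperbolic radius
    A_B = 4 * math.pi * s ** 2 * math.sinh(r_B / (2 * s)) ** 2   # hyperbolic area of the packing region
    A_n = A_B / n                                                # area per particle
    return 2 * s * math.asinh(math.sqrt(A_n / (4 * math.pi * s ** 2)))

def repulsion_loss(x: torch.Tensor, r: float, k: float = 1.55, c: float = 1.0) -> torch.Tensor:
    """Pairwise repulsion (Eq. 3): pushes particles x (n x 2) to be at least 2*r_n apart."""
    n = x.shape[0]
    r_n = per_particle_radius(n, r, c)
    d = poincare_dist(x[:, None, :], x[None, :, :])
    d = d + torch.eye(n, device=x.device) * 1e6                  # ignore self-pairs
    gap = (2 * r_n - torch.clamp(2 * r_n - d, min=0.0)).clamp_min(1e-6)
    C = (2 * r_n) ** (k + 1) / k
    return ((1.0 / gap ** k - 1.0 / (2 * r_n) ** k) * C).sum()

def boundary_loss(x: torch.Tensor, r: float, margin: float = 0.01) -> torch.Tensor:
    """Boundary loss (Eq. 4): keeps particles inside Euclidean radius r of the ball."""
    return torch.clamp(x.norm(dim=-1) - r + margin, min=0.0).sum()
```

Starting from randomly initialized particles, minimizing repulsion_loss + boundary_loss with a standard optimizer (the paper reports 1000 epochs at learning rate 0.01) would yield the uniform packing used as assignment targets.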
4.3 HYPERBOLIC INSTANCE ASSIGNMENT

HACK learns the features by optimizing the assignment of the images to the particles (Figure 5). Once we have generated a fixed set of uniformly packed particles in a two-dimensional hyperbolic space, our next goal is to assign each image to a corresponding particle. The assignment should be one-to-one, that is, each image is assigned to exactly one particle and each particle is associated with at most one image. We cast the instance assignment problem as a bipartite matching problem (Gibbons, 1985) and solve it with the Hungarian algorithm (Munkres, 1957).

Figure 5: HACK conducts unsupervised learning in hyperbolic space with sphere packing. The images are mapped to particles by minimizing the total hyperbolic distance. HACK learns features that capture both visual similarity and prototypicality.

Algorithm 1 HACK: Unsupervised Learning in Hyperbolic Space.
Require: number of images n ≥ 0; radius for packing r < 1; an encoder fθ with parameters θ.
1: Generate uniformly distributed particles in hyperbolic space by minimizing the repulsion loss in Equation 3.
2: Given batches {(x1, s1), (x2, s2), ..., (xb, sb)}, optimize fθ by minimizing the total hyperbolic distance, re-matching images to particles via the Hungarian algorithm.

Initially, we randomly assign the particles to the images, so there is a random (not yet optimized) one-to-one correspondence between images and particles. Given a batch of samples {(x1, s1), (x2, s2), ..., (xb, sb)}, where xi is an image and si is the corresponding particle, and an encoder fθ, we compute the hyperbolic feature of each image xi as fθ(xi) ∈ B2, where B2 is a two-dimensional Poincaré ball. We aim to find the minimum-cost bipartite matching of the images to the particles within this batch. It is worth noting that no labels are needed and the assignment is done without supervision. In the bipartite matching, the cost is the hyperbolic distance of each image feature to each particle, and the criterion is to minimize the total hyperbolic distance of the assignment. We achieve this with the Hungarian algorithm (Munkres, 1957), which has a complexity of $O(b^3)$, where b is the batch size. Since the assignment is restricted to the samples in a particular batch, the time and memory complexity is tolerable. The one-to-one correspondence between images and particles is always maintained during training. The details of HACK are shown in Algorithm 1. Due to the properties of the hyperbolic distance, images that are more typical tend to be assigned to particles located in the center of the Poincaré ball. Thus, HACK implicitly defines prototypicality as the distance of a sample to all the other samples. The prototypicality of an image can then be read off from the location of its assigned particle. Moreover, similar images tend to cluster together due to semantic similarity. In summary, with hyperbolic instance assignment, HACK automatically organizes images based on prototypicality by exploiting the hyperbolicity of the space. A minimal sketch of the per-batch assignment step is given below.
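The per-batch re-matching step could look as follows. This sketch uses scipy's linear_sum_assignment, which solves the same linear assignment problem as the Hungarian algorithm, reuses poincare_dist from the packing sketch above, and the function name reassign_batch is illustrative.

```python
import torch
from scipy.optimize import linear_sum_assignment

def reassign_batch(encoder, images, particles):
    """Re-match a batch of images to its current particles by total hyperbolic distance.

    images:    (b, C, H, W) tensor of input images
    particles: (b, 2) tensor of the particles currently owned by this batch
    Returns col such that image i is re-assigned to particles[col[i]].
    """
    with torch.no_grad():
        feats = encoder(images)                                          # (b, 2) points on the Poincaré ball
        cost = poincare_dist(feats[:, None, :], particles[None, :, :])   # (b, b) pairwise hyperbolic distances
    _, col = linear_sum_assignment(cost.cpu().numpy())                   # minimum-cost one-to-one matching
    return col

# Training alternates between (i) updating the assignment with reassign_batch every
# other epoch and (ii) minimizing the hyperbolic distance between fθ(x_i) and its
# assigned particle s_i by gradient descent on θ.
```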
Why Does HACK Work? Hyperbolic space can embed tree structures with arbitrarily low distortion. In particular, the root of a tree can be embedded in the center of the Poincaré ball and the leaves close to the boundary, so the root is close to all the other nodes. This agrees with our intuition that typical examples should be close to all other examples. By minimizing the total assignment loss of the images to the particles, we seek to organize the images implicitly in a tree-structured manner. Consider three images A, B, and C as an example, and assume image A is the most typical image, so that the feature of A is close to both the features of B and C. The bipartite matching then tends to assign image A to the particle in the center, since this naturally reflects the feature distances between the three images.

Connection to Existing Methods. Existing works address the problem of prototypicality discovery with ad hoc metrics (Carlini et al., 2018). These metrics usually have high variance due to different training setups or hyperparameters. In this paper, we take a different perspective and exploit the natural organization of the data by optimizing the hyperbolic instance assignment. The properties of hyperbolic space facilitate the discovery of prototypicality. Popular contrastive unsupervised learning methods such as SimCLR (Chen et al., 2020) and MoCo (He et al., 2020) cannot achieve this goal since they do not specify a predefined target structure.

5 EXPERIMENTS

We design several experiments to show the effectiveness of HACK for semantic and prototypical organization. First, we construct a dataset with known prototypicality using the congealing algorithm (Miller et al., 2000). Then, we apply HACK to datasets with unknown prototypicality to organize the samples based on their semantic and prototypical structure. Finally, we show that the prototypical structure can be used to reduce sample complexity and increase model robustness.

5.1 DATASETS

We first construct a dataset called Congealed MNIST. To verify the efficacy of HACK for unsupervised prototypicality discovery, we need a benchmark with known prototypical examples; however, there is currently no standard benchmark for this purpose. To construct one, we use the congealing algorithm from Miller et al. (2000) to align the images in each class of MNIST (LeCun, 1998). The congealing algorithm was originally used for one-shot classification. During congealing, the images are jointly brought into correspondence with each other. The congealed images are more prototypical: they are better aligned with the average image. In Figure 2, we show the original images and the images after congealing. The original images are transformed via affine transformations to better align with each other. The synthetic dataset is generated by replacing 500 original images with the corresponding congealed images. In Section E of the Appendix, we show the results of changing the number of replaced original images. We expect HACK to discover the congealed images and place them in the center of the Poincaré ball. We also aim to discover the prototypical examples in each class of the standard MNIST dataset (LeCun, 1998) and CIFAR10 (Krizhevsky et al., 2009). CIFAR10 consists of 60000 images from 10 object categories ranging from airplane to truck. CIFAR10 is more challenging than MNIST since it has larger intra-class variations.

5.2 BASELINES

We consider several existing metrics proposed in Carlini et al. (2018) for prototypicality discovery; the details can be found in Section C of the Appendix. Holdout Retraining: We consider the Holdout Retraining proposed in Carlini et al. (2018). The idea is that the features of a prototypical example obtained from models trained on different datasets should be close. Model Confidence: Intuitively, the model should be confident on prototypical examples. Thus, it is natural to use the confidence of the model prediction as the criterion for prototypicality.

5.3 IMPLEMENTATION DETAILS

We implement HACK in PyTorch and the code will be made public. To generate the uniform particles, we first randomly initialize the particles.
We run the training for 1000 epochs to minimize the repulsion loss and boundary loss, with a learning rate of 0.01. The curvature of the Poincaré ball is 1.0 and the packing radius r is 0.76, which alleviates numerical issues (Guo et al., 2021b). The hyperparameter k is set to 1.55, which we found to produce well-spread uniform particles. For the assignment, we use a LeNet (LeCun et al., 1998) for MNIST and a ResNet20 (He et al., 2016) for CIFAR10 as the encoder. We apply HACK to each class separately. We attach a fully connected layer to project the feature into a two-dimensional Euclidean space; the image features are then projected onto hyperbolic space via an exponential map. We run the training for 200 epochs with an initial learning rate of 0.1 and a cosine learning rate scheduler (Loshchilov & Hutter, 2016). We optimize the assignment every other epoch. All experiments are run on an NVIDIA TITAN RTX GPU.

5.4 PROTOTYPICALITY DISCOVERY ON CONGEALED MNIST

Figure 6 shows that HACK can discover the congealed images among all the images. In Figure 6a), the red particles denote the congealed images and the cyan particles denote the original images. We observe that the congealed images are assigned to the particles located in the center of the Poincaré ball. This verifies that HACK can indeed discover prototypical examples in the original dataset. Section G.1 in the Appendix shows that during training the features of atypical examples gradually move to the boundary of the Poincaré ball. In Figure 6b), we show the actual images embedded in the two-dimensional hyperbolic space. We observe that the images in the center of the Poincaré ball are more prototypical and the images close to the boundary are more atypical. Moreover, the images are naturally organized by their semantic similarity. Figure 7 shows that the features of the original images move closer to the center of the Poincaré ball after congealing. In summary, HACK discovers prototypicality and also organizes the images based on their semantics. To the best of our knowledge, this is the first unsupervised learning method that can be used to discover prototypical examples in a data-driven fashion.

5.5 RESULTS ON STANDARD BENCHMARKS

Figure 8 shows the embedding of class 0 from MNIST and class “airplane” from CIFAR10 in hyperbolic space. We sample 2000 images from MNIST and CIFAR10 for better visualization, and we also show the arrangement of the images at different angles. Radially, the images are arranged based on prototypicality: the prototypical images tend to be located in the center of the Poincaré ball. Especially for CIFAR10, the images become blurry and even unrecognizable as we move towards the boundary of the ball. Angularly, the images are arranged based on visual similarity, which transitions smoothly as we move around the ball. Please see Section D for more results.

Comparison with Baselines. Figure 11 shows the comparison of the baselines with HACK. We observe that both HACK and Model Confidence (MC) can discover typical and atypical images. Compared with MC, HACK defines prototypicality as the distance of a sample to the other samples, which is more aligned with human intuition. Moreover, in addition to prototypicality, HACK can also be used to organize examples by semantic similarity. Holdout Retraining (HR) is not effective for prototypicality discovery due to the randomness of model training.
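Both applications in the next section select examples by the norm of their learned hyperbolic features, where a small norm indicates a typical example and a large norm an atypical one. A minimal sketch of this selection step, with illustrative names:

```python
import torch

def split_by_prototypicality(features: torch.Tensor, fraction: float = 0.1):
    """Return indices of the most typical and most atypical `fraction` of examples.

    features: (n, 2) hyperbolic features on the Poincaré ball.
    Small norm = close to the origin = typical; large norm = near the boundary = atypical.
    """
    norms = features.norm(dim=-1)
    k = max(1, int(fraction * norms.numel()))
    typical_idx = torch.topk(norms, k, largest=False).indices
    atypical_idx = torch.topk(norms, k, largest=True).indices
    return typical_idx, atypical_idx
```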
5.6 APPLICATION OF PROTOTYPICALITY

Reducing Sample Complexity. The proposed HACK can discover prototypical images as well as atypical images. We show that with atypical images we can reduce the sample complexity for training a model. Prototypical images are representative of the dataset but lack variation; atypical examples contain more variation, and it is intuitive that models trained on atypical examples should generalize better to the test samples. To verify this hypothesis, we select a subset of samples based on the norm of the features, which indicates the prototypicality of the examples. We consider using either the most typical or the most atypical examples for training the model. We train a LeNet on MNIST for 10 epochs with a learning rate of 0.1. Figure 9a) shows that training with atypical images achieves much higher accuracy than training with typical images. In particular, training with the most atypical 10% of the images achieves 16.54% higher accuracy than with the most typical 10% of the images. Thus, HACK provides an easy solution to reduce sample complexity. The results further verify that HACK can distinguish between prototypical and atypical examples.

Increasing Model Robustness. Training models with atypical examples can lead to models that are vulnerable to adversarial attacks (Liu et al., 2018; Carlini et al., 2018). Intuitively, atypical examples lead to a less smooth decision boundary, and a small perturbation of such an example is likely to change the prediction. With HACK, we can easily identify atypical samples to improve the robustness of the model. We use MNIST as the benchmark and use FGSM (Goodfellow et al., 2014) to attack the model with ϵ = 0.07. We identify the atypical examples with HACK and remove the most atypical X% of the examples. Figure 9b) shows that discarding atypical examples greatly improves the robustness of the model: the adversarial accuracy is improved from 84.72% to 93.42% by discarding the most atypical 1% of the examples. It is worth noting that the clean accuracy remains the same after removing a small number of atypical examples.

6 SUMMARY

We propose an unsupervised learning method, called HACK, for organizing images with sphere packing in hyperbolic space. HACK optimizes the assignments of the images to a fixed set of uniformly distributed particles. Prototypical and semantic structures emerge naturally due to the properties of the hyperbolic distance. We apply HACK to synthetic data with known prototypicality and to standard image datasets. The discovered prototypical and atypical examples can be used to reduce sample complexity and increase model robustness.

A APPENDIX

B MORE DETAILS ON HYPERBOLIC INSTANCE ASSIGNMENT

A more detailed description of the hyperbolic instance assignment is given here. Initially, we randomly assign the particles to the images. Given a batch of samples {(x1, s1), (x2, s2), ..., (xb, sb)}, where xi is an image and si is the corresponding particle, and an encoder fθ, we compute the hyperbolic feature of each image xi as fθ(xi) ∈ B2, where B2 is a two-dimensional Poincaré ball. We aim to find the minimum-cost bipartite matching of the images to the particles, where the cost is the total hyperbolic distance of the hyperbolic features to the particles. We first compute all pairwise distances between the hyperbolic features and the particles; this is the cost matrix of the bipartite graph. We then use the Hungarian algorithm to optimize the assignment (Figure 12). Suppose we train the encoder fθ for T epochs.
We run the hyperbolic instance assignment every other epoch to avoid instability during training. We optimize the encoder fθ to minimize the hyperbolic distance of each hyperbolic feature to its assigned particle within each batch.

C DETAILS OF BASELINES

Holdout Retraining: We consider the Holdout Retraining proposed in Carlini et al. (2018). The idea is that the features of a prototypical example obtained from models trained on different datasets should be close. In Holdout Retraining, multiple models are trained on the same dataset. The distances between the features of the images obtained from the different models are computed and ranked; the prototypical examples are those with the closest feature distances.

Model Confidence: Intuitively, the model should be confident on prototypical examples. Thus, it is natural to use the confidence of the model prediction as the criterion for prototypicality. Once we train a model on the dataset, we use its confidence to rank the examples; the prototypical examples are those the model is most confident about.

D MORE RESULTS ON PROTOTYPICALITY DISCOVERY

We show the visualization of all the images in Figure 17 and Figure 18. The images are organized naturally based on their prototypicality and semantic similarity. We further conduct retrieval based on the norm of the hyperbolic features to extract the most typical and atypical images on CIFAR10 in Figure 19. Hyperbolic features with large norms correspond to atypical images and hyperbolic features with small norms correspond to typical images. It can be observed that the objects in the atypical images are barely visible.

E GRADUALLY ADDING MORE CONGEALED IMAGES

We gradually increase the number of original images replaced by congealed images from 100 to 500. As shown in Figure 13, HACK learns representations that capture the concept of prototypicality regardless of the number of congealed images. This again confirms the effectiveness of HACK for discovering prototypicality.

F DIFFERENT RANDOM SEEDS

We further run the assignment 5 times with different random seeds. The results are shown in Figure 14. We observe that the algorithm does not suffer from high variance and the congealed images are always assigned to the particles in the center of the Poincaré ball. This further confirms the efficacy of the proposed method for discovering prototypicality.

G EMERGENCE OF PROTOTYPICALITY IN THE FEATURE SPACE

Existing unsupervised learning methods mainly focus on learning features for differentiating different classes or samples (Wu et al., 2018; He et al., 2020; Chen et al., 2020). The learned representations are transferred to various downstream tasks such as segmentation and detection. In contrast, the features learned by HACK aim at capturing prototypicality within a single class. To investigate the effectiveness of HACK for revealing prototypicality, we can include or exclude the congealed images in the training process. When the congealed images are included, we expect them to be located in the center of the Poincaré ball and the original images near the boundary. When the congealed images are excluded, we expect the features of the congealed images produced by the trained network to be located in the center of the Poincaré ball.

G.1 TRAINING WITH CONGEALED IMAGES AND ORIGINAL IMAGES

We follow the same setup as in Section 4.3.1 of the main text.
Figure 15 shows the hyperbolic features of the congealed images and the original images at different training epochs. The features of the congealed images stay in the center of the Poincaré ball while the features of the original images gradually expand towards the boundary.

G.2 TRAINING ONLY WITH ORIGINAL IMAGES

Figure 16 shows the hyperbolic features of the congealed images when the model is trained only with original images. As shown before, congealed images are naturally more typical than their corresponding original images since they are aligned with the average image. The features of the congealed images are all located close to the center of the Poincaré ball. This demonstrates that prototypicality naturally emerges in the feature space. By not using congealed images during training, we exclude any artifacts and further confirm the effectiveness of HACK for discovering prototypicality. We also observe that the features produced by HACK capture the fine-grained similarities among the congealed images, despite the fact that all of them are aligned with the average image.

H DISCUSSIONS ON SOCIETAL IMPACT AND LIMITATIONS

We address the problem of unsupervised learning in hyperbolic space. We believe the proposed HACK should not raise any ethical concerns. We discuss current limitations below.

Applying to the Whole Dataset. Currently, HACK is applied to each class separately. It would be interesting to apply HACK to all classes at once without supervision. This is much more challenging since we need to differentiate between examples from different classes in addition to capturing the prototypical and semantic structure.

Exploring Other Geometrical Structures. We consider uniform packing in hyperbolic space to organize the images. It is also possible to extend HACK by specifying other geometrical structures to encourage the corresponding organization to emerge from the dataset.
1. What is the main contribution of the paper regarding prototype extraction?
2. What are the strengths and weaknesses of the proposed algorithm, particularly HACK?
3. Do you have any concerns or questions about the motivation behind sphere packing and Hungarian matching?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any comparisons with hyperbolic VAEs in the experimental sections?
6. How does the encoder network used in the experiments consider the geometric properties of hyperbolic spaces?
7. Is the use case of atypical examples convincing enough?
8. Why are congealed images considered ground truth for typical examples?
9. How do the two proposed applications in the experiments relate to each other?
Summary Of The Paper
This paper proposes a new algorithm called HACK to extract prototypical examples from a given dataset. A sphere packing algorithm is first proposed to obtain particles on the Poincaré ball model. The data points are then encoded into hyperbolic space via an encoder network, and the Hungarian algorithm is applied to find the mapping between particles and latent representations. Through the experiments, it is verified that typical examples are located close to the origin of the Poincaré model and atypical examples near the boundary. The classification model trained with atypical examples shows better classification accuracy, and the model trained with typical examples shows better adversarial defense ability.

Strengths And Weaknesses
Strengths
• The paper proposes a sphere packing algorithm for the Poincaré disk model.
• Many exemplar figures help in understanding the paper.

Weaknesses
• The motivation for sphere packing and Hungarian matching is unclear. We have already observed that hyperbolic VAEs can capture the hierarchical structure of a dataset, and often the data points near the center of the Poincaré model show patterns similar to prototypical examples. Hyperbolic VAEs also try to spread the latent variables across the Poincaré disk in order to minimize the reconstruction loss, so in some sense they implicitly perform sphere packing as well. I encourage the authors to include a comparison with hyperbolic VAEs in the experimental sections.
• The encoder network used in the experiments does not take the geometric properties of hyperbolic spaces into account. The model outputs a latent vector defined in Euclidean space, which is then used as a vector in the Poincaré ball. There are known encoder models that take the geometric structure of hyperbolic spaces into account (cf. hyperbolic VAEs).
• The use case of atypical examples is not convincing. In the sample complexity experiments, atypical examples are used to reduce the sample complexity of a classification model. On the other hand, dataset condensation is already well studied; to better highlight the potential application of atypical examples, a comparison between dataset condensation and training on atypical examples should be provided.
• The congealed images are considered ground truth for typical examples, but I could not find a justification for this.
• The two applications proposed in the experiments seem to contradict each other: the first use case encourages the use of atypical examples whereas the second encourages the use of typical examples.

Clarity, Quality, Novelty And Reproducibility
This paper is well written and easy to follow. The circles in Figure 4 (a) should not be drawn like that if the circle represents the Poincaré ball.
Title The Emergence of Prototypicality: Unsupervised Feature Learning in Hyperbolic Space Abstract Prototypicality is extensively studied in machine learning and computer vision. However, there is still no widely accepted definition of prototypicality. In this paper, we first propose to define prototypicality based on the concept of congealing. Then, we develop a novel method called HACK to automatically discover prototypical examples from the dataset. HACK conducts unsupervised prototypicality learning in Hyperbolic space with sphere pACKing. HACK first generates uniformly packed particles in the Poincaré ball of hyperbolic space and then assigns the image uniquely to each particle. Due to the geometrical property of hyperbolic space, prototypical examples naturally emerge and tend to locate in the center of the Poincaré ball. HACK naturally leverages hyperbolic space to discover prototypical examples in a data-driven fashion. We verify the effectiveness of the method with synthetic dataset and natural image datasets. Extensive experiments show that HACK can naturally discover the prototypical examples without supervision. The discovered prototypical examples and atypical examples can be used to reduce sample complexity and increase model robustness. 1 INTRODUCTION Not all instances are created equal. Some instances are more representative of the class and some instances are outliers or anomalies. Representative examples can be viewed as prototypes and used for interpretable machine learning (Bien & Tibshirani, 2011), curriculum learning (Bengio et al., 2009) and learning better decision boundaries (Carlini et al., 2018). With prototypical examples, we can also conduct classification with few or even one example (Miller et al., 2000). Given an image dataset, thus it is desirable to organize the examples based on prototypicality. If the features of the images are given, it is relatively easy to find the prototypes by examining the density peaks of the feature distribution. If the features are not given, to discover prototypical examples without supervision is difficult: there is no universal definition or simple metric to assess the prototypicality of the examples. A naive method to address this problem is to examine the gradient magnitude (Carlini et al., 2018). However, this approach is shown to have a high variance which is resulted from different training setups (Carlini et al., 2018). Some methods address this problem from the perspective of adversarial robustness (Stock & Cisse, 2018; Carlini et al., 2018): prototypical examples should be more adversarially robust. However, the selection of the prototypical examples highly depends on the adversarial method and the metric used in adversarial attack. Several other methods exist for this problem but they are either based on heuristics or lack a proper justification (Carlini et al., 2018). In this paper, we first introduce a way of obtaining prototypical examples from image congealing (Miller et al., 2000). Congealing is the process of jointly aligning a set of images. The congealed images are transformed to better align with the average image and thus more typical. We further propose a novel method, called HACK, by leveraging the geometry of hyperbolic space for unsupervised learning. Hyperbolic space is non-Euclidean space with constant non-negative curvature Anderson (2006). Different from Euclidean space, hyperbolic space can represent hierarchical relation with low distortion. 
Poincaré ball model is one of the most commonly used models for hyperbolic space (Nickel & Kiela, 2017b). One notable property of Poincaré ball model is that the distance to the origin grows exponentially as we move towards the boundary. Thus, the points located in the center of the ball are close to all the other points while the points located close to the boundary are infinitely Figure 1: Different from the existing unsupervised learning methods which aim to group examples via semantic similarity, HACK organizes images in hyperbolic space in a hierarchical manner. The typical images are at the center of the Poincaré ball and the atypical images are close to the boundary of the Poincaré ball. far away from other points. With unsupervised learning in hyperbolic space, HACK can learn features which capture both visual similarity and prototypicality(Figure 1). HACK optimizes the organization of the dataset by assigning the images to a set of uniformly distributed particles in hyperbolic space. The assignment is done by minimizing the total hyperbolic distance between the image features and the particles via Hungarian algorithm. The prototypicality arises naturally based on the distance of the example to other examples. Prototypical examples tend to locate in the center of the Poincaré ball and atypical examples tend to locate close to the boundary. Hyperbolic space readily facilitates such an organization due to property of the hyperbolic distance. In summary, the contributions of the papers are, • We propose the first unsupervised feature learning method to learn features which capture both visual similarity and prototypicality. The positions of the features reflect prototypicality of the examples. • The proposed method HACK assigns images to particles that are uniformly packed in hyperbolic space. HACK fully exploits the property of hyperbolic space and prototypicality arises naturally. • We ground the concept of prototypicality based on congealing which conforms to human visual perception. The congealed examples can be used to replace the original examples for constructing datasets with known prototypicality. We validate the effectiveness of the method by using a synthetic data with natural and congealed images. We further apply the proposed method to commonly used image datasets to reveal prototypicality. • The discovered prototypical and atypical examples are shown to reduce sample complexity and increase robustness of the model. 2 RELATED WORK Prototypicality. The study of prototypical examples in machine learning has a long history. In Zhang (1992), the authors select typical instances based on the fact that typical instances should be representative of the cluster. In Kim et al. (2016), prototypical examples are defined as the examples that have minimum maximum mean discrepancy within the data. Li et al. (Li et al., 2018) propose to discover prototypical examples by architectural modifications: the dataset is first projected onto a low-dimensional manifold and a prototype layer is used to minimize the distance between inputs and the prototypes on the manifold. The robustness to adversarial attacks are also used as a criteria for prototypicality (Stock & Cisse, 2018). In Carlini et al. (2018), the authors propose multiple metrics for prototypicality discovery. For example, the features of prototypical examples should be consistent across different training setups. However, these metrics usually depend heavily on the training setups and hyperparameters used for training. 
The idea of prototypicality is also extensively studied in meta-learning for one-shot or few-shot classification (Snell et al., 2017). No existing works address the prototypicality discovery problem in a data-driven fashion. Our proposed HACK naturally exploits hyperbolic space to organize the images based on prototypicality. Unsupervised Learning in Hyperbolic Space. Learning features in hyperbolic space has shown to be useful for many machine learning problems (Nickel & Kiela, 2017a; Ganea et al., 2018). One useful property is that hierarchical relations can be embedded in hyperbolic space with low distortion (Nickel & Kiela, 2017a). A generalized version of the normal distribution called wrapped normal distribution is proposed for modeling distribution of points in hyperbolic space (Nagano et al., 2019). The proposed wrapped normal distribution is used as the latent space for constructing hyperbolic variational autoencoders (VAEs) (Kingma & Welling, 2013). Poincaré VAEs is constructed in Mathieu et al. (2019) with a similar idea to Nagano et al. (2019) by replacing the standard normal distribution with hyperbolic normal distribution. Unsupervised 3D segmentation (Hsu et al., 2020) and instance segmentation (Weng et al., 2021) are conducted in hyperbolic space via hierarchical hyperbolic triplet loss. CO-SNE (Guo et al., 2021a) is recently proposed to visualize high-dimensional hyperbolic features in a two-dimensional hyperbolic space. Although hyperbolic distance facilitates the learning of hierarchical structure, how to leverage hyperbolic space for unsupervised prototypicality discovery is not explored in the current literature. Sphere Packing. The problem of sphere packing is to pack a set of particles as densely as possible in a space (Conway & Sloane, 2013). Sphere packing can be served as a toy model for granular materials and has applications in information theory (Shannon, 2001) to find error-correcting codes (Cohn, 2016). Sphere packing is difficult due to multiple local minima, the curse of high-dimensionality and complicated geometrical configurations. Packing in hyperbolic space is also studied in the literature. It is given in Böröczky (1978) a universal upper bound for the density of sphere packing in an n-dimensional hyperbolic space when n ≥ 2. We are interested in generating uniform packing in a two-dimensional hyperbolic space. Uniformity has been shown to be a useful criterion for learning good features on the hypersphere (Wang & Isola, 2020). We opt to find the configuration with an optimization procedure which is easily applicable even with thousands of particles. 3 OVERVIEW Given existing features {f(vi)} which are obtained by applying a feature extractor for each instance vi, we can find the prototypical examples by examining the density peaks via techniques from density estimation. For example, the K-nearest neighbor density (K-NN) estimation (Fix & Hodges, 1989) is defined as, pknn(vi, k) = k n 1 Ad ·Dd(vi, vk(i)) (1) where d is the feature dimension, Ad = πd/2/Γ(d/2+1), Γ(x) is the Gamma function and k(i) is the kth nearest neighbor of example vi. The nearest neighbors can be found by computing the distance between the features. However, different training setups can induce different feature spaces, which in turn lead to different conclusions of prototypicality. Our goal is to learn features that naturally reflect prototypicality of the examples. We ground our concept of prototypicality based on congealing (Miller et al., 2000). 
In particular, we define prototypical examples in the pixel space by examining the distance of the images to the average image in the corresponding class. Our idea is based on a traditional computer vision technique called image alignment (Szeliski et al., 2007) which aims to find correspondences across images. During congealing (Miller et al., 2000), a set of images are transformed to be jointly aligned by minimizing the joint pixelwise entropies. The congealed images are more prototypical: they are better aligned with the average image. Thus, we have a simple way to transform an atypical example to a typical example (see Figure 2). This is useful since given an unlabeled image dataset the typicality of the examples are unknown, congealing examples can be naturally served as examples with known typicality and be used as a validation for the effectiveness of our method. 4 UNSUPERVISED FEATURE REPRESENTATION IN HYPERBOLIC SPACE We aim to develop a method which can automatically discover prototypical examples unsupervisedly. In particular, we conduct unsupervised learning in hyperbolic space with sphere packing (Figure 5). We specify where the targets should be located ahead of training with uniform packing in hyperbolic space, which by design are maximally evenly spread out in hyperbolic space. The uniformly distributed particles guide feature learning to achieve maximum instance discrimination (Wu et al., 2018). HACK figures out which instance should be mapped to which target through bipartite graph matching as a global optimization procedure. During training HACK minimizes the total hyperbolic distances between the mapped image point (in the feature space) and the target, those that are more typical naturally emerge closer to the origin of Poincaré ball. Prototypicality comes for free as a result of self-organization. HACK differs from the existing learning methods in several aspects (Figure 3). Different from supervised learning, HACK allows the image to be assigned to any target (particle). This enables exploration of natural organizations of the data. Different from existing unsupervised learning learning method, HACK specifies a predefinted geometrical organization which encourages the corresponding structure to be emerged from the dataset. Existing methods are not applicable for prototypicality discovery without supervision due to their aforementioned limitations. Section 4.1 gives the background on hyperbolic space. Section 4.2 describes the steps for generating uniformly distributed particles in hyperbolic space. Section 4.3 delineates the details of hyperbolic instance assignment via Hungarian algorithm. 4.1 POINCARÉ BALL MODEL FOR HYPERBOLIC SPACE Hyperbolic space. Euclidean space has a curvature of zero and a hyperbolic space is a Riemannian manifold with a constant negative curvature. Poincaré Ball Model for Hyperbolic Space. There are several isometrically equivalent models for visualizing hyperbolic space with Euclidean representation. The Poincaré ball model is the commonly used one in hyperbolic representation learning (Nickel & Kiela, 2017b). The n-dimensional Poincaré ball model is defined as (Bn, gx), where Bn = {x ∈ Rn : ∥x∥ < 1} and gx = (γx)2In is the Riemannian metric tensor. γx = 21−∥x∥2 is the conformal factor and In is the Euclidean metric tensor. Hyperbolic Distance. 
Given two points u ∈ Bn and v ∈ Bn, the hyperbolic distance is defined as, dBn(u,v) = arcosh ( 1 + 2 ∥u− v∥2 (1− ∥u∥2)(1− ∥v∥2) ) (2) where arcosh is the inverse hyperbolic cosine function and ∥·∥ is the usual Euclidean norm. Figure 4: The proposed repulsion loss is used to generate uniformly packed particles in hyperbolic space. (a) If the distance between two particles are within rn,r, minimizing the repulsion loss would push the two particles away. (b) The repulsion loss is larger when the two particles become closer. (a) (b) Hyperbolic distance has the unique property that it grows exponentially as we move towards the boundary of the Poincaré ball. In particular, the points on the circle represents points in the infinity. Hyperbolic space is naturally suitable for embedding hierarchical structure (Sarkar, 2011; Nickel & Kiela, 2017b) and can be regarded as a continuous representation of trees (Chami et al., 2020). The hyperbolic distance between samples implicitly reflects their hierarchical relation. Thus, by embedding images in hyperbolic space we can naturally organize images based on their semantic similarity and prototypicality. 4.2 SPHERE PACKING IN HYPERBOLIC SPACE Given n particles, our goal is to pack the particles into a two-dimensional hyperbolic space as densely as possible. We derive a simple repulsion loss function to encourage the particles to be equally distant from each other. The loss is derived via the following steps. First, we need to determine the radius of the Poincaré ball used for packing. We use a curvature of 1.0 so the radius of the Poincaré ball is 1.0. The whole Poincaré ball cannot be used for packing since the volume is infinite. We use r < 1 to denote the actual radius used for packing. Thus, our goal is to pack n particles in a compact subspace of Poincaré ball. Then, the Euclidean radius r is further converted into hyperbolic radius rB. Let s = 1√ c , where c is the curvature. The relation between r and rB is rB = s log s+rs−r . Next, the total hyperbolic area AB of a Poincaré ball of radius rB can be computed as AB = 4πs2 sinh2( rB2s ), where sinh is the hyperbolic sine function. Finally, the area per point An can be easily computed as ABn , where n is the total number of particles. Given An, the radius per point can be computed as rn = 2s sinh −1( √ An 4πs2 ). We use the following loss to generate uniform packing in hyperbolic space. Given two particles i and j, the repulsion loss V is defined as, V (i, j; k, n, r) = { 1 [2rn −max(0, 2rn − dB(i, j))]k − 1 (2rn)k } · C(k) (3) where C(k) = (2rn) k+1 k and k is a hyperparameter. Intuitively, if the particle i and the particle j are within 2rn, the repulsion loss is positive. Minimizing the repulsion loss would push the particle i and j away. If the repulsion is zero, this indicates all the particles are equally distant (Figure 4 a). Figure 4 b) shows that the repulsion loss grows significantly when the two particles become close. We also adopt the following boundary loss to prevent the particles from escaping the ball, B(i; r) = max(0, normi − r + margin) (4) where normi is the ℓ2 norm of the representation of the particle i. Figure 3 b) shows an example of the generated particles that are uniformly packed in hyperbolic space. 4.3 HYPERBOLIC INSTANCE ASSIGNMENT HACK learns the features by optimizing the assignments of the images to the particles (Figure 5). 
Once we generate a fixed set of uniformly packed particles in a two-dimensional hyperbolic space, our next goal is to assign each image to the corresponding particle. The assignment should be one-to-one, that is, each image should be assigned to one particle and each particle is allowed to be associated with only one image. We cast the instance assignment problem as a bipartite matching problem (Gibbons, 1985) and solve it Hungarian algorithm (Munkres, 1957). Figure 5: HACK conducts unsupervised learning in hyperbolic space with sphere packing. The images are mapped to particles by minimizing the total hyperbolic distance. HACK learns features that can capture both visual similarities and prototypicality. Algorithm 1 HACK: Unsupervised Learning in Hyperbolic Space. Require: # of images: n ≥ 0. Radius for packing: r < 1. An encoder with parameters θ: fθ 1: Generate uniformly distributed particles in hyperbolic space by minimizing the repulsion loss in Equation 3 2: Given {(x1, s1), (x2, s2), ..., (xb, sb)}, optimize fθ by minimizing the total hyperbolic distance via Hungarian algorithm. Initially, we randomly assign the particles to the images, thus there is a random one-to-one correspondence between the images to the particles (not optimized). Given a batch of samples {(x1, s1), (x2, s2), ..., (xb, sb)}, where xi is an image and si is the corresponding particle, and an encoder fθ, we generate the hyperbolic feature for each image xi as fθ(xi) ∈ B2, where B2 is a two-dimensional Poincaré ball. We aim to find the minimum cost bipartite matching of the images to the particles within this batch. It is worth noting that no labels are needed and the assignment is done without supervision. In the bipartite matching, the cost is the hyperbolic distance of each image to the particle. Thus, the criterion is to minimize the total hyperbolic distances of the assignment. We achieve this goal with Hungarian algorithm Munkres (1957) which has a complexity of O(b3), where b is the batch size. It is worth noting that the assignment is only limited to the samples in the particular batch, thus the time and memory complexity is tolerable. The one-to-one correspondence between the images and particles are always maintained during training. The details of HACK is shown in Algorithm 1. Due to the property of hyperbolic distance, the images that are more typical tend to be assigned to the particles located in the center of the Poincaré ball. Thus, HACK implicitly defines prototypicality as the distance of the sample to all the other samples. The prototypicality of the images can be easily reflected by the location of the assigned particles. Moreover, similar images tend to cluster together due to semantic similarity. In summary, with hyperbolic instance assignment, HACK automatically organizes images based on prototypicality by exploiting hyperbolicity of the space. Why Does HACK Work? Hyperbolic space can embed tree structure with no distortion. In particular, the root of the tree can be embedded in the center of of the Poincaré ball and the leaves are embedded close to the boundary. Thus, the root is close to all the other nodes. This agrees with our intuition that typical examples should be close to all other examples. By minimizing the total assignment loss of the images to the particles, we seek to organize the images implicitly in a tree-structure manner. Consider three images A, B, C for an example. Assume image A is the most typical image. Thus the feature of A is close to both the features of B and C. 
The bipartite matching tends to assign image A to the particle in the center since this naturally reflects the feature distances between the three images. Connection to Existing Methods. Existing works address the problem of prototypicality discovery with ad-hoc defined metrics (Carlini et al., 2018). These metrics usually have high-variances due to different training setups or hyperparameters. In this paper, we take a different perspective by exploiting the natural organization of the data by optimizing hyperbolic instance assignment. The property of hyperbolic space facilitates discovery of prototypicality. Also, popular contrastive learning based unsupervised learning methods such as SimCLR (Chen et al., 2020) and MoCo (He et al., 2020) cannot achieve this goal since the predefined structure is not specified. 5 EXPERIMENTS We design several experiments to show the effectiveness of HACK for semantic and prototypical organization. First, we first construct a dataset with known prototypicality using the congealing algorithm (Miller et al., 2000). Then, we apply HACK to datasets with unknown prototypicality to organize the samples based on the semantic and prototypical structure. Finally, we show that the prototypical structure can be used to reduce sample complexity and increase model robustness. 5.1 DATASETS We first construct a dataset called Congealed MNIST. To verify the efficacy of HACK for unsupervised prototypicality discovery, we need a benchmark with known prototypical examples. However, currently there is no standard benchmark for this purpose. To construct the benchmark, we use the congealing algorithm from Miller et al. (2000) to align the images in each class of MNIST (LeCun, 1998). The congealing algorithm is initially used for one-shot classification. During congealing, the images are brought into correspondence with each other jointly. The congealed images are more prototypical: they are better aligned with the average image. In Figure 2, we show the original images and the images after congealing. The original images are transformed via affine transformation to better align with each other. The synthetic data is generated by replacing 500 original images with the corresponding congealed images. In Section E of the Appendix, we show the results of changing the number of replaced original images. We expect HACK to discover the congealed images and place them in the center of the Poincaré ball. We also aim to discover the prototypical examples from each class of the standard MNIST dataset (LeCun, 1998) and CIFAR10 (Krizhevsky et al., 2009). CIFAR10 consists of 60000 from 10 object categories ranging from airplane to truck. CIFAR10 is more challenging than MNIST since it has larger intra-class variations. 5.2 BASELINES We consider several existing metrics proposed in Carlini et al. (2018) for prototypicality discovery, the details can be found in Section C of the Appendix. Holdout Retraining: We consider the Holdout Retraining proposed in Carlini et al. (2018). The idea is that the distance of features of prototypical example obtained from models trained on different datasets should be close. Model Confidence: Intuitively, the model should be confident on prototypical examples. Thus, it is natural to use the confidence of the model prediction as the criterion for prototypicality. 5.3 IMPLEMENTATION DETAILS We implement HACK in Pytorch and the code will be made public. To generate the uniform particles, we first randomly initialize the particles. 
5.3 IMPLEMENTATION DETAILS
We implement HACK in PyTorch and the code will be made public. To generate the uniform particles, we first randomly initialize the particles. We then run the training for 1000 epochs to minimize the repulsion loss and boundary loss. The learning rate is 0.01. The curvature of the Poincaré ball is 1.0, and the packing radius r is 0.76, which alleviates numerical issues (Guo et al., 2021b). The hyperparameter k is 1.55, which we find generates uniform particles well. For the assignment, we use a LeNet (LeCun et al., 1998) for MNIST and a ResNet20 (He et al., 2016) for CIFAR10 as the encoder. We apply HACK to each class separately. We attach a fully connected layer to project the feature into a two-dimensional Euclidean space. The image features are further projected onto hyperbolic space via an exponential map. We run the training for 200 epochs and the initial learning rate is 0.1. We use a cosine learning rate scheduler (Loshchilov & Hutter, 2016). We optimize the assignment every other epoch. All the experiments are run on an NVIDIA TITAN RTX GPU.

5.4 PROTOTYPICALITY DISCOVERY ON CONGEALED MNIST
Figure 6 shows that HACK can discover the congealed images among all the images. In Figure 6 a), the red particles denote the congealed images and the cyan particles denote the original images. We can observe that the congealed images are assigned to the particles located in the center of the Poincaré ball. This verifies that HACK can indeed discover prototypical examples from the original dataset. Section G.1 in the Appendix shows that during training the features of atypical examples gradually move to the boundary of the Poincaré ball. In Figure 6 b), we show the actual images that are embedded in the two-dimensional hyperbolic space. We can observe that the images in the center of the Poincaré ball are more prototypical and the images close to the boundary are more atypical. Also, the images are naturally organized by their semantic similarity. Figure 7 shows that the features of the original images move closer to the center of the Poincaré ball after congealing. In summary, HACK discovers prototypicality and also organizes the images based on their semantics. To the best of our knowledge, this is the first unsupervised learning method that can be used to discover prototypical examples in a data-driven fashion.

5.5 RESULTS ON STANDARD BENCHMARKS
Figure 8 shows the embedding of class 0 from MNIST and class “airplane” from CIFAR10 in the hyperbolic space. We sample 2000 images from MNIST and CIFAR10 for better visualization. We also show the angular arrangement of the images at different angles. Radially, we can observe that images are arranged based on prototypicality. The prototypical images tend to lie in the center of the Poincaré ball. Especially for CIFAR10, the images become blurry and even unrecognizable as we move towards the boundary of the ball. Angularly, the images are arranged based on visual similarity, which transitions smoothly as we move around angularly. Please see Section D for more results.

Comparison with Baselines. Figure 11 shows the comparison of the baselines with HACK. We can observe that both HACK and Model Confidence (MC) can discover typical and atypical images. Compared with MC, HACK defines prototypicality as the distance of a sample to the other samples, which is more aligned with human intuition. Moreover, in addition to prototypicality, HACK can also be used to organize examples by semantic similarity. Holdout Retraining (HR) is not effective for prototypicality discovery due to the randomness of model training.
5.6 APPLICATION OF PROTOTYPICALITY
Reducing Sample Complexity. The proposed HACK can discover prototypical images as well as atypical images. We show that with atypical images we can reduce the sample complexity of training the model. Prototypical images are representative of the dataset but lack variation. Atypical examples contain more variation, and it is intuitive that models trained on atypical examples should generalize better to the test samples. To verify this hypothesis, we select a subset of samples based on the norm of the features, which indicates the prototypicality of the examples. We consider using both the most typical and the most atypical examples for training the model. We train a LeNet on MNIST for 10 epochs with a learning rate of 0.1. Figure 9 a) shows that training with atypical images achieves much higher accuracy than training with typical images. In particular, training with the most atypical 10% of the images achieves 16.54% higher accuracy than training with the most typical 10% of the images. Thus, HACK provides an easy solution for reducing sample complexity. The results further verify that HACK can distinguish between prototypical and atypical examples.

Increasing Model Robustness. Training models with atypical examples can lead to models that are vulnerable to adversarial attacks (Liu et al., 2018; Carlini et al., 2018). Intuitively, atypical examples lead to a less smooth decision boundary, and a small perturbation of such an example is likely to change the prediction. With HACK, we can easily identify atypical samples to improve the robustness of the model. We use MNIST as the benchmark and use FGSM (Goodfellow et al., 2014) to attack the model with ϵ = 0.07. We identify the atypical examples with HACK and remove the most atypical X% of the examples. Figure 9 b) shows that discarding atypical examples greatly improves the robustness of the model: the adversarial accuracy is improved from 84.72% to 93.42% by discarding the most atypical 1% of the examples. It is worth noting that the clean accuracy remains the same after removing a small number of atypical examples.
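Both applications only require ranking samples by the norm of their hyperbolic features; a minimal sketch of this selection step is shown below (the split fraction is an argument, e.g. 0.1 for the 10% subsets used above).

```python
import torch

def split_by_norm(features, fraction=0.1):
    """Split samples into the most typical / most atypical `fraction` by the norm of
    their hyperbolic features: small norm ~ prototypical, large norm ~ atypical."""
    order = features.norm(dim=-1).argsort()      # ascending: most typical first
    k = max(1, int(fraction * len(order)))
    return order[:k], order[-k:]                 # (typical indices, atypical indices)
```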
6 SUMMARY
We propose an unsupervised learning method, called HACK, for organizing images with sphere packing in hyperbolic space. HACK optimizes the assignments of the images to a fixed set of uniformly distributed particles. Prototypical and semantic structures emerge naturally due to the properties of the hyperbolic distance. We apply HACK to synthetic data with known prototypicality and to standard image datasets. The discovered prototypical and atypical examples can be used to reduce sample complexity and increase model robustness.

A APPENDIX
B MORE DETAILS ON HYPERBOLIC INSTANCE ASSIGNMENT
We give a more detailed description of the hyperbolic instance assignment. Initially, we randomly assign the particles to the images. Given a batch of samples {(x1, s1), (x2, s2), ..., (xb, sb)}, where xi is an image and si is the corresponding particle, and an encoder fθ, we generate the hyperbolic feature for each image xi as fθ(xi) ∈ B^2, where B^2 is a two-dimensional Poincaré ball. We aim to find the minimum-cost bipartite matching of the images to the particles. The cost to minimize is the total hyperbolic distance of the hyperbolic features to the particles. We first compute all the pairwise distances between the hyperbolic features and the particles; this is the cost matrix of the bipartite graph. Then we use the Hungarian algorithm to optimize the assignment (Figure 12). Suppose we train the encoder fθ for T epochs. We run the hyperbolic instance assignment every other epoch to avoid instability during training. We optimize the encoder fθ to minimize the hyperbolic distance of each hyperbolic feature to its assigned particle in each batch.

C DETAILS OF BASELINES
Holdout Retraining: We consider the Holdout Retraining proposed in Carlini et al. (2018). The idea is that the features of a prototypical example obtained from models trained on different datasets should be close. In Holdout Retraining, multiple models are trained on the same dataset. The distances of the features of the images obtained from the different models are computed and ranked. The prototypical examples are those examples with the closest feature distances. Model Confidence: Intuitively, the model should be confident on prototypical examples. Thus, it is natural to use the confidence of the model prediction as the criterion for prototypicality. Once we train a model on the dataset, we use the confidence of the model to rank the examples. The prototypical examples are those examples that the model is most confident on.

D MORE RESULTS ON PROTOTYPICALITY DISCOVERY
We show the visualization of all the images in Figure 17 and Figure 18. The images are organized naturally based on their prototypicality and semantic similarity. We further conduct retrieval based on the norm of the hyperbolic features to extract the most typical and atypical images on CIFAR10 in Figure 19. The hyperbolic features with large norms correspond to atypical images and the hyperbolic features with small norms correspond to typical images. It can be observed that the objects in the atypical images are not visible.

E GRADUALLY ADDING MORE CONGEALED IMAGES
We gradually increase the number of original images replaced by congealed images from 100 to 500. Still, as shown in Figure 13, HACK learns representations that capture the concept of prototypicality regardless of the number of congealed images. This again confirms the effectiveness of HACK for discovering prototypicality.

F DIFFERENT RANDOM SEEDS
We further run the assignment 5 times with different random seeds. The results are shown in Figure 14. We observe that the algorithm does not suffer from high variance and the congealed images are always assigned to the particles in the center of the Poincaré ball. This further confirms the efficacy of the proposed method for discovering prototypicality.

G EMERGENCE OF PROTOTYPICALITY IN THE FEATURE SPACE
Existing unsupervised learning methods mainly focus on learning features for differentiating different classes or samples (Wu et al., 2018; He et al., 2020; Chen et al., 2020). The learned representations are transferred to various downstream tasks such as segmentation and detection. In contrast, the features learned by HACK aim at capturing prototypicality within a single class. To investigate the effectiveness of HACK for revealing prototypicality, we can include or exclude congealed images in the training process. When the congealed images are included in the training process, we expect the congealed images to be located in the center of the Poincaré ball while the original images are located near the boundary of the Poincaré ball. When the congealed images are excluded from the training process, we expect the features of the congealed images produced by the trained network to be located in the center of the Poincaré ball.

G.1 TRAINING WITH CONGEALED IMAGES AND ORIGINAL IMAGES
We follow the same setup as in Section 4.3.1 of the main text.
Figure 15 shows the hyperbolic features of the congealed images and the original images at different training epochs. The features of the congealed images stay in the center of the Poincaré ball while the features of the original images gradually expand towards the boundary.

G.2 TRAINING ONLY WITH ORIGINAL IMAGES
Figure 16 shows the hyperbolic features of the congealed images when the model is trained only with original images. As we have shown before, congealed images are naturally more typical than their corresponding original images since they are aligned with the average image. The features of the congealed images are all located close to the center of the Poincaré ball. This demonstrates that prototypicality naturally emerges in the feature space. Since no congealed images are used during training, this excludes any artifacts and further confirms the effectiveness of HACK for discovering prototypicality. We also observe that the features produced by HACK capture the fine-grained similarities among the congealed images, despite the fact that all the images are aligned with the average image.

H DISCUSSIONS ON SOCIETAL IMPACT AND LIMITATIONS
We address the problem of unsupervised learning in hyperbolic space. We believe the proposed HACK should not raise any ethical considerations. We discuss current limitations below.
Applying to the Whole Dataset. Currently, HACK is applied to each class separately. It would therefore be interesting to apply HACK to all the classes at once without supervision. This is much more challenging since we need to differentiate between examples from different classes as well as capture the prototypical and semantic structure.
Exploring Other Geometrical Structures. We consider uniform packing in hyperbolic space to organize the images. It is also possible to extend HACK by specifying other geometrical structures to encourage the corresponding organization to emerge from the dataset.
1. What is the main contribution of the paper, and how does it differ from other unsupervised learning approaches? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its experimental validation and dependence on encoder training? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any questions or concerns regarding the paper's assumptions, methods, or conclusions?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper introduces an unsupervised approach to detect images (or instances in general) that are the most "prototypical" in a given dataset such as MNIST or CIFAR-10. To do this, "particles", i.e. 2-dimensional embeddings in a Poincare ball, are first artificially generated so that they are spread in an almost uniform way (see Section 4.2). Representations of images (obtained with some neural network such as LeNet) are then assigned one-to-one to the particles using the Hungarian algorithm, so that the most typical images end up closer to the rest of the images than atypical images. Experiments on MNIST and CIFAR-10 show that typical images tend to be closer to the origin than atypical images.
Strengths And Weaknesses
Strength: the idea can be seen as a dimensionality reduction approach in some hyperbolic space, although particles tend to be spread uniformly in the submission.
Weaknesses: In general, the experimental validation on toy datasets such as MNIST and CIFAR-10 is weak. It is not clear how the approach would be useful to the machine learning or computer vision communities. Moreover, the image-to-particle assignments depend mostly on the trained "encoders" (LeNet for MNIST and ResNet20 for CIFAR-10). However, it is not clear how these encoders are trained or pretrained. Since the approach is unsupervised, is it by using some self-supervised approach such as SimCLR? I assume that the way these encoders are pretrained has an impact on the assignments, and different pretraining approaches may result in different assignments and different conclusions.
Typo: in the introduction, it is mentioned that hyperbolic space is a non-Euclidean space of non-negative curvature. It is of negative curvature.
Clarity, Quality, Novelty And Reproducibility
The method is clear, although the way the encoders are trained is not discussed. The motivation of the approach is unclear and the evaluation is weak.
ICLR
Title The Emergence of Prototypicality: Unsupervised Feature Learning in Hyperbolic Space

Abstract
Prototypicality is extensively studied in machine learning and computer vision. However, there is still no widely accepted definition of prototypicality. In this paper, we first propose to define prototypicality based on the concept of congealing. Then, we develop a novel method called HACK to automatically discover prototypical examples from the dataset. HACK conducts unsupervised prototypicality learning in Hyperbolic space with sphere pACKing. HACK first generates uniformly packed particles in the Poincaré ball of hyperbolic space and then assigns each image uniquely to a particle. Due to the geometrical properties of hyperbolic space, prototypical examples naturally emerge and tend to be located in the center of the Poincaré ball. HACK naturally leverages hyperbolic space to discover prototypical examples in a data-driven fashion. We verify the effectiveness of the method with a synthetic dataset and natural image datasets. Extensive experiments show that HACK can naturally discover prototypical examples without supervision. The discovered prototypical and atypical examples can be used to reduce sample complexity and increase model robustness.

1 INTRODUCTION
Not all instances are created equal. Some instances are more representative of the class and some instances are outliers or anomalies. Representative examples can be viewed as prototypes and used for interpretable machine learning (Bien & Tibshirani, 2011), curriculum learning (Bengio et al., 2009) and learning better decision boundaries (Carlini et al., 2018). With prototypical examples, we can also conduct classification with few or even one example (Miller et al., 2000). Given an image dataset, it is thus desirable to organize the examples based on prototypicality. If the features of the images are given, it is relatively easy to find the prototypes by examining the density peaks of the feature distribution. If the features are not given, discovering prototypical examples without supervision is difficult: there is no universal definition or simple metric to assess the prototypicality of the examples. A naive method to address this problem is to examine the gradient magnitude (Carlini et al., 2018). However, this approach is shown to have a high variance resulting from different training setups (Carlini et al., 2018). Some methods address this problem from the perspective of adversarial robustness (Stock & Cisse, 2018; Carlini et al., 2018): prototypical examples should be more adversarially robust. However, the selection of the prototypical examples highly depends on the adversarial method and the metric used in the adversarial attack. Several other methods exist for this problem, but they are either based on heuristics or lack a proper justification (Carlini et al., 2018). In this paper, we first introduce a way of obtaining prototypical examples from image congealing (Miller et al., 2000). Congealing is the process of jointly aligning a set of images. The congealed images are transformed to better align with the average image and are thus more typical. We further propose a novel method, called HACK, that leverages the geometry of hyperbolic space for unsupervised learning. Hyperbolic space is a non-Euclidean space with constant negative curvature (Anderson, 2006). Different from Euclidean space, hyperbolic space can represent hierarchical relations with low distortion.
The Poincaré ball model is one of the most commonly used models for hyperbolic space (Nickel & Kiela, 2017b). One notable property of the Poincaré ball model is that the distance to the origin grows exponentially as we move towards the boundary. Thus, the points located in the center of the ball are close to all the other points, while the points located close to the boundary are infinitely far away from other points.

Figure 1: Different from the existing unsupervised learning methods, which aim to group examples by semantic similarity, HACK organizes images in hyperbolic space in a hierarchical manner. The typical images are at the center of the Poincaré ball and the atypical images are close to the boundary of the Poincaré ball.

With unsupervised learning in hyperbolic space, HACK can learn features which capture both visual similarity and prototypicality (Figure 1). HACK optimizes the organization of the dataset by assigning the images to a set of uniformly distributed particles in hyperbolic space. The assignment is done by minimizing the total hyperbolic distance between the image features and the particles via the Hungarian algorithm. Prototypicality arises naturally from the distance of an example to the other examples: prototypical examples tend to be located in the center of the Poincaré ball and atypical examples tend to be located close to the boundary. Hyperbolic space readily facilitates such an organization due to the properties of the hyperbolic distance. In summary, the contributions of the paper are:
• We propose the first unsupervised feature learning method that learns features capturing both visual similarity and prototypicality. The positions of the features reflect the prototypicality of the examples.
• The proposed method HACK assigns images to particles that are uniformly packed in hyperbolic space. HACK fully exploits the properties of hyperbolic space, and prototypicality arises naturally.
• We ground the concept of prototypicality in congealing, which conforms to human visual perception. The congealed examples can be used to replace the original examples for constructing datasets with known prototypicality. We validate the effectiveness of the method using a synthetic dataset with natural and congealed images. We further apply the proposed method to commonly used image datasets to reveal prototypicality.
• The discovered prototypical and atypical examples are shown to reduce sample complexity and increase the robustness of the model.

2 RELATED WORK
Prototypicality. The study of prototypical examples in machine learning has a long history. In Zhang (1992), the authors select typical instances based on the idea that typical instances should be representative of the cluster. In Kim et al. (2016), prototypical examples are defined as the examples that have minimum maximum mean discrepancy within the data. Li et al. (2018) propose to discover prototypical examples through architectural modifications: the dataset is first projected onto a low-dimensional manifold and a prototype layer is used to minimize the distance between inputs and the prototypes on the manifold. Robustness to adversarial attacks is also used as a criterion for prototypicality (Stock & Cisse, 2018). In Carlini et al. (2018), the authors propose multiple metrics for prototypicality discovery; for example, the features of prototypical examples should be consistent across different training setups. However, these metrics usually depend heavily on the training setups and hyperparameters used for training.
The idea of prototypicality is also extensively studied in meta-learning for one-shot or few-shot classification (Snell et al., 2017). However, no existing works address the prototypicality discovery problem in a data-driven fashion. Our proposed HACK naturally exploits hyperbolic space to organize the images based on prototypicality.

Unsupervised Learning in Hyperbolic Space. Learning features in hyperbolic space has been shown to be useful for many machine learning problems (Nickel & Kiela, 2017a; Ganea et al., 2018). One useful property is that hierarchical relations can be embedded in hyperbolic space with low distortion (Nickel & Kiela, 2017a). A generalized version of the normal distribution, called the wrapped normal distribution, has been proposed for modeling distributions of points in hyperbolic space (Nagano et al., 2019) and is used as the latent space for constructing hyperbolic variational autoencoders (VAEs) (Kingma & Welling, 2013). A Poincaré VAE is constructed in Mathieu et al. (2019) with an idea similar to Nagano et al. (2019), replacing the standard normal distribution with a hyperbolic normal distribution. Unsupervised 3D segmentation (Hsu et al., 2020) and instance segmentation (Weng et al., 2021) have been conducted in hyperbolic space via a hierarchical hyperbolic triplet loss. CO-SNE (Guo et al., 2021a) was recently proposed to visualize high-dimensional hyperbolic features in a two-dimensional hyperbolic space. Although the hyperbolic distance facilitates the learning of hierarchical structure, how to leverage hyperbolic space for unsupervised prototypicality discovery has not been explored in the current literature.

Sphere Packing. The problem of sphere packing is to pack a set of particles as densely as possible into a space (Conway & Sloane, 2013). Sphere packing can serve as a toy model for granular materials and has applications in information theory (Shannon, 2001), e.g. for finding error-correcting codes (Cohn, 2016). Sphere packing is difficult due to multiple local minima, the curse of high dimensionality, and complicated geometrical configurations. Packing in hyperbolic space has also been studied in the literature: Böröczky (1978) gives a universal upper bound for the density of sphere packings in n-dimensional hyperbolic space for n ≥ 2. We are interested in generating uniform packings in a two-dimensional hyperbolic space. Uniformity has been shown to be a useful criterion for learning good features on the hypersphere (Wang & Isola, 2020). We opt to find the configuration with an optimization procedure, which is easily applicable even with thousands of particles.

3 OVERVIEW
Given existing features {f(vi)}, obtained by applying a feature extractor to each instance vi, we can find the prototypical examples by examining the density peaks via techniques from density estimation. For example, the K-nearest neighbor (K-NN) density estimate (Fix & Hodges, 1989) is defined as

p_knn(v_i, k) = (k / n) · 1 / (A_d · D^d(v_i, v_{k(i)}))   (1)

where d is the feature dimension, A_d = π^{d/2} / Γ(d/2 + 1), Γ(x) is the Gamma function, and v_{k(i)} is the k-th nearest neighbor of example v_i. The nearest neighbors can be found by computing the distances between the features. However, different training setups can induce different feature spaces, which in turn lead to different conclusions about prototypicality.
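For reference, this density-peak view is straightforward to compute once features are fixed; below is a minimal NumPy sketch of the Eq. 1 estimate, assuming Euclidean distances in the given feature space (the point of the passage being that the resulting ranking depends entirely on that feature space).

```python
import numpy as np
from scipy.special import gamma

def knn_density(features, k):
    """K-NN density estimate of Eq. 1 for an (n, d) feature matrix; larger values
    correspond to denser regions, i.e. more prototypical samples."""
    n, d = features.shape
    a_d = np.pi ** (d / 2) / gamma(d / 2 + 1)                  # the constant A_d
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                            # ignore self-distances
    r_k = np.sort(dists, axis=1)[:, k - 1]                     # distance to the k-th neighbour
    return (k / n) / (a_d * r_k ** d)
```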
Our goal is to learn features that naturally reflect the prototypicality of the examples. We ground our concept of prototypicality in congealing (Miller et al., 2000). In particular, we define prototypical examples in the pixel space by examining the distance of the images to the average image of the corresponding class. Our idea is based on a traditional computer vision technique called image alignment (Szeliski et al., 2007), which aims to find correspondences across images. During congealing (Miller et al., 2000), a set of images is transformed to be jointly aligned by minimizing the joint pixelwise entropies. The congealed images are more prototypical: they are better aligned with the average image. Thus, we have a simple way to transform an atypical example into a typical example (see Figure 2). This is useful because, given an unlabeled image dataset, the typicality of the examples is unknown; congealed examples can naturally serve as examples with known typicality and be used to validate the effectiveness of our method.

4 UNSUPERVISED FEATURE REPRESENTATION IN HYPERBOLIC SPACE
We aim to develop a method which can automatically discover prototypical examples without supervision. In particular, we conduct unsupervised learning in hyperbolic space with sphere packing (Figure 5). We specify where the targets should be located ahead of training with a uniform packing in hyperbolic space, so that by design the targets are maximally evenly spread out. The uniformly distributed particles guide feature learning to achieve maximum instance discrimination (Wu et al., 2018). HACK figures out which instance should be mapped to which target through bipartite graph matching as a global optimization procedure. During training, HACK minimizes the total hyperbolic distance between the mapped image points (in the feature space) and their targets; the images that are more typical naturally emerge closer to the origin of the Poincaré ball. Prototypicality comes for free as a result of self-organization. HACK differs from existing learning methods in several aspects (Figure 3). Different from supervised learning, HACK allows an image to be assigned to any target (particle). This enables the exploration of natural organizations of the data. Different from existing unsupervised learning methods, HACK specifies a predefined geometrical organization which encourages the corresponding structure to emerge from the dataset. Existing methods are not applicable to prototypicality discovery without supervision due to their aforementioned limitations. Section 4.1 gives the background on hyperbolic space. Section 4.2 describes the steps for generating uniformly distributed particles in hyperbolic space. Section 4.3 details the hyperbolic instance assignment via the Hungarian algorithm.

4.1 POINCARÉ BALL MODEL FOR HYPERBOLIC SPACE
Hyperbolic space. Euclidean space has a curvature of zero, whereas a hyperbolic space is a Riemannian manifold with a constant negative curvature.
Poincaré Ball Model for Hyperbolic Space. There are several isometrically equivalent models for visualizing hyperbolic space with a Euclidean representation. The Poincaré ball model is the one commonly used in hyperbolic representation learning (Nickel & Kiela, 2017b). The n-dimensional Poincaré ball model is defined as (B^n, g_x), where B^n = {x ∈ R^n : ∥x∥ < 1} and g_x = (γ_x)^2 I_n is the Riemannian metric tensor, with conformal factor γ_x = 2 / (1 − ∥x∥^2) and Euclidean metric tensor I_n.
Hyperbolic Distance.
Given two points u ∈ B^n and v ∈ B^n, the hyperbolic distance is defined as

d_{B^n}(u, v) = arcosh( 1 + 2 ∥u − v∥^2 / ((1 − ∥u∥^2)(1 − ∥v∥^2)) )   (2)

where arcosh is the inverse hyperbolic cosine function and ∥·∥ is the usual Euclidean norm.

Figure 4: The proposed repulsion loss is used to generate uniformly packed particles in hyperbolic space. (a) If the distance between two particles is within r_{n,r}, minimizing the repulsion loss pushes the two particles apart. (b) The repulsion loss is larger when the two particles are closer.

The hyperbolic distance has the unique property that it grows exponentially as we move towards the boundary of the Poincaré ball. In particular, the points on the boundary circle represent points at infinity. Hyperbolic space is therefore naturally suitable for embedding hierarchical structure (Sarkar, 2011; Nickel & Kiela, 2017b) and can be regarded as a continuous representation of trees (Chami et al., 2020). The hyperbolic distance between samples implicitly reflects their hierarchical relation. Thus, by embedding images in hyperbolic space we can naturally organize images based on their semantic similarity and prototypicality.

4.2 SPHERE PACKING IN HYPERBOLIC SPACE
Given n particles, our goal is to pack the particles into a two-dimensional hyperbolic space as densely as possible. We derive a simple repulsion loss function to encourage the particles to be equally distant from each other. The loss is derived via the following steps. First, we need to determine the radius of the Poincaré ball used for packing. We use a curvature of 1.0, so the radius of the Poincaré ball is 1.0. The whole Poincaré ball cannot be used for packing since its volume is infinite, so we use r < 1 to denote the actual radius used for packing. Thus, our goal is to pack n particles into a compact subspace of the Poincaré ball. Then, the Euclidean radius r is converted into a hyperbolic radius r_B. Let s = 1/√c, where c is the curvature. The relation between r and r_B is r_B = s log((s + r) / (s − r)). Next, the total hyperbolic area A_B of a Poincaré ball of radius r_B can be computed as A_B = 4πs^2 sinh^2(r_B / (2s)), where sinh is the hyperbolic sine function. Finally, the area per particle A_n is simply A_B / n, where n is the total number of particles. Given A_n, the radius per particle is r_n = 2s sinh^{−1}(√(A_n / (4πs^2))). We use the following loss to generate a uniform packing in hyperbolic space. Given two particles i and j, the repulsion loss V is defined as

V(i, j; k, n, r) = { 1 / [2r_n − max(0, 2r_n − d_B(i, j))]^k − 1 / (2r_n)^k } · C(k)   (3)

where C(k) = (2r_n)^{k+1} / k and k is a hyperparameter. Intuitively, if particles i and j are within 2r_n of each other, the repulsion loss is positive, and minimizing it pushes the two particles apart. If the repulsion loss is zero, no pair of particles is closer than 2r_n (Figure 4 a). Figure 4 b) shows that the repulsion loss grows sharply as two particles come close together. We also adopt the following boundary loss to prevent the particles from escaping the ball:

B(i; r) = max(0, norm_i − r + margin)   (4)

where norm_i is the ℓ2 norm of the representation of particle i. Figure 3 b) shows an example of the generated particles, which are uniformly packed in hyperbolic space.

4.3 HYPERBOLIC INSTANCE ASSIGNMENT
HACK learns the features by optimizing the assignments of the images to the particles (Figure 5).
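The packing step of Section 4.2 above is self-contained; the following is a minimal sketch of the per-particle radius and the two losses (Eqs. 3–4), assuming curvature c = 1 and an illustrative margin value, which the text does not report.

```python
import math
import torch

def poincare_dist(u, v, eps=1e-6):
    # Hyperbolic distance on the Poincare ball (Eq. 2), vectorized over the last dim.
    sq = ((u - v) ** 2).sum(-1)
    den = (1 - (u ** 2).sum(-1)).clamp(min=eps) * (1 - (v ** 2).sum(-1)).clamp(min=eps)
    return torch.acosh(1 + 2 * sq / den)

def per_particle_radius(n, r, c=1.0):
    # Hyperbolic radius r_n allotted to each of n particles packed inside Euclidean radius r < 1.
    s = 1.0 / math.sqrt(c)
    r_hyp = s * math.log((s + r) / (s - r))                      # Euclidean -> hyperbolic radius
    area = 4 * math.pi * s ** 2 * math.sinh(r_hyp / (2 * s)) ** 2
    return 2 * s * math.asinh(math.sqrt((area / n) / (4 * math.pi * s ** 2)))

def packing_losses(particles, r_n, r=0.76, k=1.55, margin=0.01):
    # Repulsion (Eq. 3) is positive whenever two particles are closer than 2*r_n;
    # the boundary term (Eq. 4) keeps particles inside Euclidean radius r.
    i, j = torch.triu_indices(len(particles), len(particles), offset=1)
    d = poincare_dist(particles[i], particles[j])
    gap = (2 * r_n - (2 * r_n - d).clamp(min=0.0)).clamp(min=1e-6)   # equals min(d, 2*r_n)
    c_k = (2 * r_n) ** (k + 1) / k
    repulsion = ((1.0 / gap ** k - 1.0 / (2 * r_n) ** k) * c_k).sum()
    boundary = (particles.norm(dim=-1) - r + margin).clamp(min=0.0).sum()
    return repulsion, boundary
```

In practice the particles would be a learnable (n, 2) tensor optimized over the reported 1000 epochs; the inner clamp is only a numerical guard against coincident particles.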
1. What is the focus of the paper regarding unsupervised representation learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its application and experimental results? 3. Do you have any concerns or suggestions regarding the paper's discussion and comparisons with other related works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes an unsupervised representation learning method in hyperbolic space to discover prototypical examples in a dataset. The authors show that these examples can be used to reduce the sample complexity and increase adversarial robustness.
Strengths And Weaknesses
[Strengths]
The motivation of using hyperbolic space to discover prototypical examples is natural and reasonable. The study of unsupervised hyperbolic representation learning is interesting. The idea of using sphere packing and a hyperbolic loss is relatively new.
[Weaknesses]
The paper misses a discussion of studies of representation learning on images, for example [1, 2, 3]. I'm not fully convinced by the experimental results in the application section on prototypicality. It is not clear to me why training on atypical images is more effective than training on typical images rather than the other way around. The authors explain that prototypical images lack variations while atypical examples contain more variations. However, some recent studies have shown that with only 10 samples (no variation), they can still achieve decent test accuracy. The correlation between prototypicality and feature norms is not convincing to me. In [2], the authors show that when training in a hyperbolic space, the ambiguous images rather than the typical images often have smaller norms. That would make sense, since training on ambiguous images is less effective than training on more deterministic images. The authors only show results on MNIST and CIFAR-10, where, looking at the images in the qualitative results, it is hard to decide whether they are more typical or not. It would probably be better to also experiment on the CIFAR-100, STL-10, or ImageNet-100 datasets, which do not require too much computation either. The other benefit is that models trained on CIFAR-100 are known to be adversarially vulnerable due to label noise; therefore, removing atypical examples could show more benefit there.
[1] Yan, Jiexi, et al. "Unsupervised hyperbolic metric learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[2] Khrulkov, Valentin, et al. "Hyperbolic image embeddings." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[3] Liu, Shaoteng, et al. "Hyperbolic visual embedding learning for zero-shot recognition." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[4] Cazenavette, George, et al. "Dataset distillation by matching training trajectories." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
Clarity, Quality, Novelty And Reproducibility
[Clarity] There are some parts of the method section that are confusing to me. For instance, how is the assignment of images to particles updated? Is it updated for the images in each batch? If so, why does the experiment section say it is updated every other epoch?
[Quality] The paper can be further improved with proof-reading. The notation r_{n,r} in the caption of Figure 4 is never used in the main body. Should it be 2r_n? There is a TODO left in the caption of Figure 8.
[Novelty] There are some works on hyperbolic representation learning on images, but they are not discussed in the paper.
[Reproducibility] The code is not provided in the supplementary material. Some hyperparameters are not given in the experiment section, e.g. the margin in Equation (4).
ICLR
Title Understanding the Asymptotic Performance of Model-Based RL Methods Abstract In complex simulated environments, model-based reinforcement learning methods typically lag the asymptotic performance of model-free approaches. This paper uses two MuJoCo environments to understand this gap through a series of ablation experiments designed to separate the contributions of the dynamics model and planner. These reveal the importance of long planning horizons, beyond those typically used. A dynamics model that directly predicts distant states, based on current state and a long sequence of actions, is introduced. This avoids the need for many recursions during long-range planning, and thus is able to yield more accurate state estimates. These accurate predictions allow us to uncover the relationship between model accuracy and performance, and translate to higher task reward that matches or exceeds current state-of-the-art model-free approaches. 1 INTRODUCTION Model-based reinforcement learning (MBRL) has many potential benefits over model-free approaches. These include (i) the ability to generalize to new tasks in the environment, without having to retrain; (ii) learning from off-policy data and (iii) sample efficiency. However, in simulated environments where data is plentiful, model-based approaches struggle to approach the asymptotic performance of model-free methods Nagabandi et al. (2017); Pong et al. (2018); Chua et al. (2018). Several possible explanations present themselves: the planner used for selecting optimal actions under the model might be insufficiently powerful; the model might not be able to accurately model the dynamics; or the planning horizon might not be long enough. This paper address these questions by teasing apart the different factors involved in an MBRL framework, applied to two deterministic MuJoCo environments (Todorov et al., 2012), with the aim of understanding the gap in asymptotic performance with respect to model-free approaches. In particular, we demonstrate that bias caused by short planning horizons and poor accuracy of long-term predictions is the cause of the poor performance of existing MBRL methods in the unlimited-sample regime. Our experiments show that, with a perfect dynamics model, the optimal planing horizon can be over 100 steps – much longer than typically considered in many MBRL approaches. Correspondingly, the performance is typically limited by the ability of the model to accurately predict over long-time scales, not just a few time-steps. Existing approaches to MBRL rely on a single-step dynamics model that predict the next state, given the current state and an action. As can be see in Figure 5, over long time-horizons the errors compound due to recursive application of the model, yielding inaccurate state estimates which are not useful for planning. Instead, we propose an alternate form of dynamics model that takes as input a sequence of actions along with the current state and directly predicts many time-steps into the future. This approach provides accurate prediction over long time horizons, allowing us to uncover the relationship between model accuracy and performance. This reveals that MBRL with sufficiently good learned models matches or exceeds the performance of state-of-the-art model-free methods. 1.1 RELATED WORK Non-Parametric Model-Based RL: Gaussian processes are popular approach to modeling nonlinear dynamics due to their low sample complexity and their ability to explicitly represent epistemic uncertainty. 
Consequently, numerous MBRL approaches use them, e.g. Kocijan et al. (2004); Ko et al. (2007); Grancharova et al. (2008); Deisenroth & Rasmussen (2011); Deisenroth et al. (2014). However, via the choice of kernel, they impose potentially unrealistic smoothness constraints and do not scale to large data settings, limiting their asymptotic performance in practice. Combining model-based and model-free methods: Due to the sample efficiency of model-based methods and the superior asymptotic performance of model-free methods, several works have proposed to learn dynamics models using a few trajectory samples, then use those models to train or augment a model-free policy. The classic Dyna algorithm (Sutton, 1990) uses a model to extend Bellman updates multiple steps. Deisenroth & Rasmussen (2011) learns a Gaussian process model of the dynamics function and uses it to train an RBF network policy, and Gal et al. (2016) enables the model to scale to larger data by using Bayesian neural networks in place of GPs. Levine et al. (2016) fits a time-varying locally linear model around a trajectory, then trains a neural network policy to follow trajectories found by iLQR (Todorov & Li, 2005). Silver et al. (2016) learns an implicit model of the dynamics for implicit planning via value estimation; in an inversion of this technique, Pong et al. (2018) learn an explicit model of Q values for explicit planning via constrained optimization. Weber et al. (2017) learns a neural network dynamics model which is unrolled inside a policy to inform an actor-critic agent. Nagabandi et al. (2017) trains a neural network dynamics model on control tasks and uses it to take actions, then uses that model-based policy to speed the training of a model-free policy via imitation learning. These works largely seek to either (i) augment a model-free method with a model for faster learning, or (ii) make up for the asymptotic deficiencies of a model-based method by transitioning to model-free. In this work we instead directly investigate the causes of MBRL's poor asymptotic performance with the aim of making a transition to model-free unnecessary. MBRL with neural network models: The idea of using neural networks to enable model-based control of nonlinear systems goes back decades (Miller et al., 1990; Schmidhuber, 1990; Hunt et al., 1992; Bekey & Goldberg, 2012; Draeger et al., 1995), but until recently has only seen significant success on systems with relatively simple dynamics. Several works have endeavored to use neural network generative models of images for model-based control (Wahlström et al., 2015; Watter et al., 2015; Finn & Levine, 2017); these policies have typically used short planning horizons and struggled to equal model-free performance on complex tasks. Lenz et al. (2015) learn recurrent neural network dynamics models, then use backpropagation through time to select actions and control a robotic arm, and Henaff et al. (2017) extend this concept to both discrete and continuous action spaces. Clavera et al. (2018) combine meta-learning with MBRL using neural network models to rapidly adapt to novel environments. Srinivas et al. (2018) uses imitation learning to train a model to plan by gradient descent, which relies on an existing expert rather than learning from scratch. The closest work to ours is Chua et al. (2018). It follows a similar recipe, with similar planning and online dataset construction, but with different models and different goals.
We use deterministic neural networks which predict many steps into the future to understand the impact of model- and horizon-bias on the asymptotic performance of MBRL methods on long-horizon problems. Chua et al. (2018) uses a bootstrapped ensemble of probabilistic neural networks to improve the performance of MBRL in the few-sample regime. While that work achieves strong performance on short-horizon tasks, in our experiments we find it struggles to equal model-free methods on tasks with very long horizons. 2 APPROACH In this section we describe the models used in our experiments, the action-conditional predictor (ACP) and the novel plan-conditional predictor (PCP), which predicts the outcome of a sequence of actions with a single model step. We then detail the framework we use for planning with and training these models. 2.1 NOTATION We denote states and actions at time t by s_t and a_t. In the environments we consider, both s_t and a_t are continuous vectors. We use H to refer to the planning horizon of an MPC policy. We refer to a sequence of actions as a plan; a plan constructed with horizon H is thus p = {a_1, ..., a_H}. In a set of plans {p_1, ..., p_n}, a^i_j refers to the j-th action of the i-th plan. We consider models which predict a future state given the current state and one or more actions. We denote by R the range of such a model, which is the number of steps the model predicts in a single application: f_R(s_t, a_t, ..., a_{t+R−1}; θ) = s̃_{t+R}, for parameters θ. We apply the model recursively using the notation F_T(s_t, a_t, ..., a_{T−1}; θ) = f_R(... f_R(f_R(s_t, a_t, ..., a_{t+R−1}; θ), a_{t+R}, ..., a_{t+2R−1}; θ) ..., a_{T−R}, ..., a_{T−1}; θ) = s̃_T. That is, F_T(·) applies f_R(·) recursively T/R times. 2.2 PLAN-CONDITIONAL PREDICTORS To test our conjecture that compounding errors limit the asymptotic performance of existing models on long-horizon RL tasks, we propose plan-conditional predictors (PCPs). A PCP takes the form f_R(s_t, a_t, ..., a_{t+R−1}) = s̃_{t+R} for some range R > 1. If R = 1, then it reduces to the standard approach (Deisenroth & Rasmussen, 2011; Gal et al., 2016; Henaff et al., 2017; Nagabandi et al., 2017; Chua et al., 2018) which predicts only a single time-step at a time, and which we call an action-conditional predictor (ACP). As shown in Figure 1, a PCP can predict H steps into the future using H/R recursive applications of the model instead of the H applications required by an action-conditional predictor. There are many possible parameterizations that could be used in a PCP. In this work we choose deep fully-connected neural networks. There are several reasons for this: (i) the space of inputs grows exponentially with the range R, thus models with high capacity are needed to minimize model bias; (ii) since the goal is to understand asymptotic performance, a large data regime is assumed and sample complexity is a secondary issue; (iii) by predicting R steps with a single network application, they are extremely fast at planning time. An obvious alternate parameterization is recurrent neural networks (RNNs), and we provide experimental comparisons between the two approaches in Section 3. For both Swimmer and HalfCheetah, the ACP and PCP networks consist of fully-connected networks with 9 hidden layers of 1000 units each, using the SELU activation function (Klambauer et al., 2017). The input state and action(s) are concatenated before being used as input to the network.
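To make this concrete, the following is a minimal PyTorch sketch of a range-R predictor and its recursive application F_T. The layer sizes and activation follow the description above; the class and function names, and the batched tensor layout, are illustrative assumptions rather than the authors' code.

import torch
import torch.nn as nn

class RangePredictor(nn.Module):
    # Predicts s_{t+R} from s_t and the R actions a_t, ..., a_{t+R-1}.
    # R = 1 corresponds to an action-conditional predictor (ACP);
    # R > 1 corresponds to a plan-conditional predictor (PCP).
    def __init__(self, state_dim, action_dim, R, hidden=1000, n_hidden=9):
        super().__init__()
        self.R = R
        dims = [state_dim + R * action_dim] + [hidden] * n_hidden
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.SELU()]
        layers.append(nn.Linear(hidden, state_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, s_t, actions):
        # s_t: (batch, state_dim); actions: (batch, R, action_dim)
        x = torch.cat([s_t, actions.flatten(start_dim=1)], dim=-1)
        return self.net(x)  # predicted s_{t+R}

def rollout(model, s_t, plan):
    # F_T: apply the range-R model recursively T / R times.
    # plan: (batch, T, action_dim), with T a multiple of model.R.
    s = s_t
    for i in range(0, plan.shape[1], model.R):
        s = model(s, plan[:, i:i + model.R])
    return s  # prediction after all T actions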
The same loss function is used for training PCPs and ACPs, namely ‖s̃_{t+R} − s_{t+R}‖_2^2, where s is the raw MuJoCo state. We assume that the environment forms a Markov decision process (MDP) (Bellman, 1957) with deterministic dynamics, properties shared by the tasks considered in this work. These assumptions allow us to focus exclusively on the significance of model fidelity and planning horizon to MBRL, but removing them is an interesting direction for future work. 2.2.1 INTERMEDIATE PREDICTIONS For visualization purposes, in some experiments we use a variant of the plan-conditional predictor which takes as input a state and a variable number of actions R′, where 1 ≤ R′ ≤ R, with the remaining action inputs set to zero. This allows us to plot error or render video of the PCP's predictions at each intermediate timestep instead of only at multiples of R. Furthermore, this variant of the model allows for planning based on reward functions which operate at each timestep rather than just at the end of the episode (see Section 2.3.1). As such, this variant of the model is applicable to any environment. 2.3 SELECTING OPTIMAL ACTIONS In order to turn a predictor into a policy, we employ an off-the-shelf planning approach, namely the cross-entropy method (CEM) (Botev, 2011), to find a plan that is optimal up to some horizon H. We take the first action from that plan and then replan, a technique known as model-predictive control (MPC) or receding-horizon control (Mayne & Michalska, 1990). 2.3.1 PLANNING WITH CROSS-ENTROPY METHOD Given a predictor F, a horizon H, and a reward function r, we would like to find an optimal plan: p*_t = argmax_{a_t, ..., a_{t+H−1}} Σ_{i=0}^{H−1} r(s̃_{t+i}, a_{t+i}), where s̃_{t+i} = F_{t+i}(s_t, a_t, ..., a_{t+i−1}). For both MuJoCo environments, the reward function is dominated by the distance traveled in the x-dimension at each timestep (footnote 1). Thus for planning purposes we can replace the original reward function with a sparse one which provides reward equal to the x-distance traveled at the end of the episode: r̂(s_t) = s_t[x] if t = T, and 0 otherwise. We then substitute t + H for T and plan based on r̂(s_{t+H}); note this is identical to using the sum of x-progress at each timestep t, ..., t + H. This reduces the form of the optimal plan under a predictor F to p*_t = argmax_{a_t, ..., a_{t+H−1}} r̂(F_H(s_t, a_t, ..., a_{t+H−1})). The cross-entropy method starts with a set of plans drawn from a candidate distribution C. In continuous control tasks, sampling actions independently along a trajectory results in near-zero net motion. Therefore it is common to instead use correlated action noise for exploration or trajectory sampling, e.g. an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) as in DDPG (Lillicrap et al., 2015). We define the candidate distribution C(·) by the following sampling process: a_0 ∼ U(−1, 1), a_{t+1} = min(max(N(µ = a_t, σ = 0.2), −1), 1), i.e. the sampled actions are clamped to the range ±1, which are the limits of the action space. The overall planning framework is shown in Figure 2. After drawing an initial set of N plans {p_1, ..., p_N} ∼ C(·), these are passed through the predictor F to estimate rewards r_1, ..., r_N, which are used to rank the plans. The top K are then passed to a 2nd round (red box). Additionally, their mean and variance are computed (footnote 2) and used as parameters for a Gaussian distribution from which N − K new plans are sampled (green box). The combined set of plans is then passed to the PCP to rank them.
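A minimal NumPy sketch of the candidate distribution C and the sampling-and-refinement rounds just described is given below; predict and reward stand in for the predictor F and the planning reward r̂, and the function names and default arguments are illustrative assumptions rather than the authors' implementation (the population sizes and number of rounds used in practice are stated in the text that follows).

import numpy as np

def sample_candidate_plans(n_plans, horizon, action_dim, sigma=0.2):
    # Candidate distribution C: a clamped Gaussian random walk per action dimension.
    plans = np.empty((n_plans, horizon, action_dim))
    plans[:, 0] = np.random.uniform(-1.0, 1.0, size=(n_plans, action_dim))
    for t in range(1, horizon):
        step = np.random.normal(loc=plans[:, t - 1], scale=sigma)
        plans[:, t] = np.clip(step, -1.0, 1.0)
    return plans

def cem_first_action(predict, reward, s_t, horizon, action_dim, rounds=3, N=50, K=5):
    # `rounds` planning rounds in total, i.e. rounds - 1 rounds of resampling.
    plans = sample_candidate_plans(N, horizon, action_dim)
    for _ in range(rounds - 1):
        scores = np.array([reward(predict(s_t, p)) for p in plans])
        elites = plans[np.argsort(scores)[-K:]]            # top-K plans
        mu, std = elites.mean(axis=0), elites.std(axis=0)  # per timestep and action dim
        resampled = np.clip(
            np.random.normal(mu, std + 1e-6, size=(N - K, horizon, action_dim)),
            -1.0, 1.0)
        plans = np.concatenate([elites, resampled], axis=0)
    scores = np.array([reward(predict(s_t, p)) for p in plans])
    return plans[np.argmax(scores)][0]  # first action of the best plan (MPC)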
The output from the planner is the first action from the top-ranked plan (yellow box) at the final planning round, which is executed by the agent in the environment. The top K trajectories from the final round are used to seed the initial set of plans for replanning at the next timestep, after clipping off the first action from each. In practice, we use 3 rounds of planning at each timestep, i.e. two rounds of resampling (the figure omits the 3rd round for clarity). In our experiments, we use N = 50 and K = 5. Footnote 1: In all experiments we evaluate a policy using the original reward function from OpenAI Gym. This simplified reward function is exclusively used inside the policy. We found in experiments using the ground-truth dynamics as a model that planning with the true reward function instead made no significant difference on these tasks. Footnote 2: Independently at each timestep and action dimension; µ(p_1, ..., p_n) = { (1/n) Σ_i a^i_j | j ∈ H } and σ(p_1, ..., p_n) = { sqrt((1/n) Σ_i (a^i_j − µ(a^{1...n}_j))²) | j ∈ H }. Figure 2: Our off-the-shelf planner, based on the cross-entropy method (Botev, 2011). See text for details.
Algorithm 1: On-policy data aggregation and training
Initialize dataset D with trajectories from a random policy
while not converged do
    θ ← argmin_θ E_{(s_t, a_{t...t+R−1}, s_{t+R}) ∼ D} (f_R(s_t, a_{t...t+R−1}; θ) − s_{t+R})²
    for m = 0 ... M do
        for t = 0 ... T do
            s_t ← Env.get_observation()
            p_t ← argmax_{a_{t...t+H−1}} r̂(F_H(s_t, a_{t...t+H−1}; θ))
            a_t ← head(p_t)
            Env.execute(a_t)
        end for
        D ← D ∪ (s_{0...T}, a_{0...T−1})
    end for
end while
2.4 ONLINE TRAINING The planning framework described above turns the PCP model into a policy which outputs an action at each time step. To train this policy, the underlying PCP model must be updated in an online fashion. This requires a dataset of trajectories {s_{0...T}, a_{0...T−1}} that covers the environment's state-action space. We follow Nagabandi et al. (2017); Chua et al. (2018) and others and collect this dataset by alternating between fitting the model to the existing data and using our planning procedure (Section 2.3.1) to generate more trajectories from the environment. We collect trajectories from the environment for M = 100 episodes. These trajectories are added to the training set and the PCP model is updated with SGD for 10 epochs using AMSGrad (Reddi et al., 2018). The overall procedure is detailed in Algorithm 1 and is essentially the standard template for MBRL (Deisenroth & Rasmussen, 2011; Gal et al., 2016; Nagabandi et al., 2017; Chua et al., 2018). 3 EXPERIMENTS 3.1 EFFECT OF PLANNING HORIZON AND MODEL RANGE In this experiment we directly test our hypothesis that plan-conditional predictors are able to benefit from longer planning horizons than action-conditional predictors. Figure 3 shows that while an ACP model is competitive for planning horizons up to 20 timesteps, its performance falls substantially below the PCP models by 40 timesteps. The ACP model scores best at a horizon of 60 timesteps; beyond that its performance degrades as its predictions become unusably inaccurate. By contrast, the PCP models shown, which need only be recursively applied between 3 and 20 times, all show monotonically increasing rewards as the planning horizon is increased.
This reveals two things: the Swimmer task has a minimum optimal planning horizon of at least 100 timesteps, and the PCP models are able to predict with sufficient accuracy to be useful even over that long horizon. In the next experiments we tease apart the different factors of planning horizon, range, and accuracy that combine to produce these results. 3.2 PLANNING HORIZON VS REWARD Using a ground-truth model of the environment (that is, MuJoCo itself) in conjunction with our planner, we can look at performance as a function of planning horizon for these tasks. That is, we define a new predictor MJC(s_t, a_{t...t+T}) which can make predictions arbitrarily far into the future. This predictor works internally by creating a new copy, env_internal, of the Gym environment. To make a prediction MJC(s_t, a_{t...t+T}) = s_{t+T+1}, this predictor calls env_internal.set_state(s_t), then repeatedly calls env_internal.step(a_{t+i}) for i = 0 ... T. Its output is the observation after the final action has been executed. This ground-truth predictor can be used for planning like any other. The results of this experiment, shown in Figure 4, indicate that the optimal planning horizons for Swimmer and HalfCheetah are around 150 and 40 timesteps, respectively. These results provide additional clarity to those presented in the previous experiment. Policies that use the ground-truth dynamics as their predictor perform better as the horizon increases due to the decreased bias of the long-horizon reward estimate. The approximate dynamics models from PCPs show similar gains. However, beyond a certain planning horizon the quasi-random search in the planner becomes less effective due to variance caused by the huge size of the search space, causing the reward to dip for H > 150. Previous work (Nagabandi et al., 2017; Chua et al., 2018) has demonstrated planning for 20 or 30 timesteps with a neural network dynamics model (and in particular, the recent Chua et al. (2018) achieves impressive scores on HalfCheetah). However, to our knowledge, planning horizons above 50 timesteps remain untested. This leads us to investigate the accuracy of very long-range prediction for traditional action-conditional predictors as well as our plan-conditional predictors. Footnote 3: We found it necessary to handicap the ground-truth model on HalfCheetah by adding noise to its actions during planning to prevent the planner from breaking the simulation. Without this handicap the planner was able to find nonphysical strategies and achieve expected rewards of up to 150,000. 3.3 MODEL RANGE VS ACCURACY With the evidence from Section 3.2 that some tasks require planning horizons up to 150 timesteps, we now evaluate action-conditional and plan-conditional predictors on their ability to make long-range predictions in MuJoCo. Additionally we compare to an RNN which predicts one step at a time, but which is trained with backpropagation through time (BPTT) to minimize prediction error across all timesteps. This RNN is trained with a curriculum of prediction lengths ranging from 1 (at the beginning of training) to 200 (at the end). To enable direct comparisons, we employ a fixed dataset of trajectories from the environment. We generate this dataset by training a model-free PPO (Schulman et al., 2017) agent on Swimmer and recording the trajectories that it takes. This ensures that the dataset contains trajectories that involve interacting with the environment in nontrivial ways.
We then split that data into a training set and a validation set and train each model to convergence on the training set. Figure 5 shows the results of evaluating these models on the validation set. While the ACP is able to make extremely accurate predictions a few steps into the future, it suffers from accumulating error when it is recursively applied many times. This suggests an explanation for the inability of the ACP models to take advantage of planning horizons longer than 60 timesteps, as discussed in Section 3.1. As the horizon increases, the predictions from an ACP diverge from reality, while simultaneously the bias from using a too-short planning horizon decreases. This produces an optimal planning horizon of intermediate length given a model whose error increases as a function of depth. The RNN and the PCP are both optimized for long-term prediction accuracy and thus make much better predictions. The PCP model achieves slightly better accuracy, and does so in a fraction of the time; to predict 200 steps into the future requires 200 applications of the RNN network, but only 4 of the PCP. This performance gap is significant, as planning requires evaluating hundreds of thousands of trajectories per episode. 3.4 MODEL ACCURACY VS REWARD Figure 6 shows the reward vs. prediction error for 10 models at various points during online training in the Swimmer environment. These results show a clear relationship between low prediction error of the model and high reward, reinforcing the importance of having a highly accurate long-range model of the environment. Taken together with the high error for ACPs in Figure 5, this indicates that the RL performance of action-conditional predictors is limited by their inability to make accurate predictions at long timescales. The left panel of Figure 6 shows a nearly vertical trend at the far left of the plot. We hypothesize that this is due to the changing distribution of the training data as a function of the predictor's accuracy; once a predictor is making accurate predictions, the trajectories that it follows change from being nearly random to more focused. This means that most of the progress late in training comes from refining the predictions on a very narrow distribution of trajectories. These refinements continue to improve the RL performance of the predictor but have little impact on its accuracy along trajectories coming from a different policy. 3.5 COMPARISON TO OTHER APPROACHES In this experiment we evaluate the performance of ACP and PCP models compared to previous reinforcement learning methods, both model-free and model-based, on Swimmer-v2 and HalfCheetah-v2 from OpenAI Gym (Brockman et al., 2016) (footnote 4). We also show the performance of an RNN model, which is identical to ACP but trained via backpropagation through time (BPTT) to minimize prediction error across the entire planning horizon, i.e. 100 steps for Swimmer and 20 steps for HalfCheetah. Since the PCP models predict R timesteps per network application (versus one forward pass through the network per timestep for ACP and RNN models), the PCP models are a factor of R faster to plan with in wall-clock time. Our main model-based baseline is PETS (Chua et al., 2018), a state-of-the-art probabilistic neural-network-based MBRL algorithm which has been shown to equal or exceed model-free performance on short-horizon tasks. Similar to this work, PETS uses MPC and CEM for model-based control and aggregates a dataset online.
On HalfCheetah we also compare with the model-based results from Nagabandi et al. (2017), which follows the same basic formula as our work but uses random shooting to find optimal plans instead of CEM. Our model-free baseline is PPO (Schulman et al., 2017) as implemented by Kostrikov (2018), a high-performing actor-critic method. For each method we run five seeds and allow the algorithm to run to convergence, as we are interested in evaluating asymptotic performance. We train PPO for 100,000 trajectories. On Swimmer and HalfCheetah we use planning horizons of 100 and 20 timesteps, respectively, for the ACP, PCP, RNN, and PETS results. For each of the baseline methods we plot a horizontal line indicating the best score achieved by that method at any point in training, after averaging over the random seeds and over several consecutive episodes. We also show lines for the score achieved by using the ground-truth dynamics with the same planner as we use for PCP. In the case of Nagabandi et al. (2017) the score shown is that reported in their work; while the version of the HalfCheetah environment that they use is slightly different from the Gym one, we believe the numbers to be roughly comparable. Footnote 4: We selected Swimmer and HalfCheetah to follow the main experiments from Nagabandi et al. (2017). Figure 7 shows the results of this experiment. On Swimmer, which has a very long planning horizon, PCP achieves rewards more than 50% higher than the next-best method, while on HalfCheetah it equals the performance of PPO. We speculate that the extremely high rewards achieved by Chua et al. (2018) on HalfCheetah are due in part to the difference in settings for CEM between our two works; while we use 3 steps of CEM optimization with 50 candidates per step, Chua et al. (2018) use 5 steps of optimization on 500 candidates. 4 DISCUSSION In this work we considered the problem of model bias in model-based reinforcement learning. In the largely deterministic environments considered, we show that optimal planning horizons can be large, beyond 100 timesteps. Over these horizons, NN-based models trained to minimize single-step prediction error do not perform well. We demonstrate that better performance is possible with NN models by changing the loss function and the form of the model. Further experiments confirm that model accuracy is crucial to end-task performance. Our experiments make several simplifying assumptions, most notably the availability of unlimited samples and deterministic environment dynamics. Sample complexity would undoubtedly be improved by replacing the current overparameterized MLP architecture with something more efficient, an interesting future direction. Another important area for future work is understanding the interaction of long-range planning with stochasticity in the environment, including the development of generative models capable of predictions over long horizons. APPENDIX A PLANNING WITH A GROUND TRUTH MODEL
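For illustration, a minimal Python sketch of the ground-truth predictor from Section 3.2 is shown below. The set_state/step interface mirrors the description in the main text; the wrapper class, its name, and the exact Gym/MuJoCo state-setting API are assumptions and may differ from the authors' implementation.

import gym

class GroundTruthPredictor:
    # MJC(s_t, a_t, ..., a_{t+T}): predict by rolling the simulator itself forward.
    def __init__(self, env_name):
        # Internal copy of the environment, used purely for prediction.
        self.env_internal = gym.make(env_name)
        self.env_internal.reset()

    def predict(self, state, actions):
        # Restore the queried state, execute the plan, and return the
        # observation after the final action.
        self.env_internal.set_state(state)  # assumed full-state setter
        obs = None
        for a in actions:
            obs, _, _, _ = self.env_internal.step(a)
        return obs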
1. What is the main contribution of the paper in the field of model-based RL?
2. What are the strengths and weaknesses of the proposed multi-step prediction model in RL?
3. How does the reviewer assess the limitation of the proposed approach in terms of its applicability to stochastic systems?
4. What is the significance of the assumption made in the paper regarding the proxy for the sum of R rewards?
5. How does the reviewer evaluate the experimental comparison made in the paper, particularly the absence of certain baselines?
6. Is there a connection between the proposed R-step model-based RL approach and the use of options? If so, how can it be discussed further?
7. Are there any other relevant works in time-series modeling or multi-step prediction that the paper could have discussed?
Review
The paper proposes to use a multi-step prediction model in model-based RL. The proposed model maps from the current state and a sequence of actions to the state after taking those actions. The paper demonstrates on 2 tasks that, in a model-predictive control loop combined with planning by the cross-entropy method, this can yield better asymptotic performance than using single-step models.

The insight of using multi-step prediction models is certainly appealing and makes a lot of sense in deterministic tasks. A systematic empirical comparison of multi-step deep models in RL is of interest, which this paper does provide to some extent. An obvious limitation of the proposed deterministic multi-step forward model is the restriction to deterministic systems. One would expect that the performance deteriorates quickly as the system becomes more stochastic. An extension to the stochastic case along the lines of Chua et al., 2018 is non-trivial, as capturing the stochasticity is typically more challenging in long-term predictions. Yet, the paper makes an additional assumption that is less clearly communicated: to be able to plan with an R-step model, one needs to be able to evaluate or approximate the sum of R rewards just from the first and last state in that R-long sequence. This work simply uses the reward at the end, r(s_{t+R}), as a proxy, which works well in these MuJoCo tasks but can fail horribly in others. One can imagine a model that outputs not only s_{t+R} but also the sum of R rewards given s_t and a_{t:t+R}, which could work in more general settings, but this is not explored in this paper. The contribution of this paper is limited, as both the proposed approach and the experimental comparison are restricted to a relatively specific class of problems and no attempts to generalize are made.

The experiments nicely compare against using single-step dynamics models, and the results show that using the multi-step models for MPC performs better in the two considered tasks. However, as far as I understand, both the ACP and the Chua et al. baseline use single-step prediction accuracy to train their models. The paper is missing a comparison to single-step models that are trained using multi-step prediction losses ("backprop through time" as in Learning Nonlinear Dynamic Models by Langford et al., 2009). These models should be much more robust to error blow-up for multi-step prediction and do not require the specific reward structure assumed in this paper.

The proposed R-step model-based RL approach could be connected to the use of options (the planner and model operate on R-step options, but the MPC does update the policy after every time step). It would be interesting to discuss this potential connection in the paper. The paper does a good job of discussing existing recent work in the deep RL literature, but it would be good to also discuss earlier work on multi-step prediction (e.g. in time-series modeling).

All in all, I think the paper makes a small contribution demonstrating that multi-step models are useful for model-based RL in specific domains -- which is interesting but certainly not surprising. Unfortunately the paper stops somewhat early by not comparing to relevant baselines (single-step models trained with multi-step losses) and by not considering tasks where the benefit of multi-step planning would be less clear.
1. What is the focus of the paper regarding predictive control?
2. What is the key idea proposed by the authors, and how does it differ from prior approaches?
3. What are the strengths and weaknesses of the proposed method, particularly in comparison to other recent algorithms?
4. How does the reviewer assess the significance and applicability of the approach, especially regarding its potential impact on the community?
5. Are there any concerns or suggestions regarding the comparisons made in the paper, specifically with respect to the choice of settings for CEM optimization?
6. Are there any other relevant works that the authors could consider comparing their method to, such as Clavera et al. (Sep 2018)?
Review
The authors learn a model that predicts the state R steps in the future, given the current state and intervening actions, instead of predicting the state at the next time step. The model is then used for standard model-predictive control. The authors find numerically that their method, termed Plan-Conditional Predictor (PCP), performs better over long time horizons (~100 time steps) than other recent model-based and model-free algorithms. This is because, over long horizons, a model predicting the state at the next time step accumulates error when applied recursively. The key idea is to use a model that directly predicts multiple time steps into the future. While seemingly an obvious extension, it does not appear to have been used in current algorithms.

A main issue I find with this approach is that, since only the state after R steps is predicted, the reward r(s_t, a_t) can only be used every R steps, not at every step. The authors gloss over this issue because, for both MuJoCo environments they tested, they only need to consider the reward at the end of the planning horizon. Thus, to make their algorithm generally applicable, the authors also need to show how or whether their method can deal with rewards that may appear at any time step.

Further, rather than speculating that the difference between their PCP and PETS (Chua et al., 2018) on HalfCheetah (Fig. 7b) is due to their different settings for CEM optimization, the authors should simply use the same settings for the comparison. Possibly the authors ran out of time to do this for the current submission, but they should certainly do it for the final version.

While the authors have already compared to other algorithms with similar aims, e.g. Chua et al., 2018, they may also wish to compare to a recent preprint (Clavera et al., Sep 2018), which also aims to combine the sample efficiency of model-based methods with the performance of model-free ones by using an ensemble of models over a 200-time-step horizon. However, given the recency of this algorithm, I don't consider this essential.

Overall, I feel that the authors' idea of an R-step model is worth spreading in the community, if the above two main points are addressed. At the same time, I can only rate it at the border of the cutoff mark.
ICLR
1. What are the novel aspects of the proposed approach in the paper? 2. How does the proposed method differ from traditional methods in terms of its output? 3. What are the potential limitations of the proposed approach, particularly regarding data requirements and applicability to realistic problems? 4. How does the reviewer assess the title of the paper and its relevance to the content? 5. Are there any related works in the semi-MDP literature that the authors should consider?
Review
Review This paper proposes learning a transition model that takes an action sequence as an input (instead of a single action), and performing model-based planning by using the cross-entropy method. One obvious concern is that this produces a sequence of open-loop plans, rather than a closed-loop policy, with all the inherent limitations. I could see this working well in practice on problems where anticipating how future decisions will react to state changes is not that important; however, the authors should discuss the trade-offs more. A larger concern for me revolves around learning the transition model. Taking the action sequence as an input (which is one of the main novelties in the paper) is likely to require a lot of data; maybe this is fine on relatively simple MuJoCo tasks, but I see it as a potential issue when trying to expand this to more realistic problems. Finally, I suggest that the authors change the title to something more descriptive of the paper's contents, as there is no analysis of asymptotic performance in the paper (as I would have thought from the title). I also recommend that they look to see if there is any model-based work in the semi-MDP literature, which could be relevant here.
ICLR
1. What is the focus of the paper in terms of control theory? 2. What is the main contribution of the paper, particularly in regards to prediction models? 3. Are there any limitations or weaknesses in the paper's approach or methodology? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any additional insights or perspectives that the reviewer would like the authors to provide?
Review
Review This paper studies the model-based approach in deterministic, low-dimensional continuous control. As far as I understood, the main contribution of this paper is substituting the one-step-ahead prediction model with a multiple-step prediction model, resulting in a more accurate prediction model. I was not able to find contributions beyond this. I would be happy if the authors could clarify this.
ICLR
Title Generalizing Multimodal Variational Methods to Sets Abstract Making sense of multiple modalities can yield a more comprehensive description of real-world phenomena. However, learning the co-representation of diverse modalities is still a long-standing endeavor in emerging machine learning applications and research. Previous generative approaches for multimodal input approximate a joint-modality posterior by uni-modality posteriors as product-of-experts (PoE) or mixture-of-experts (MoE). We argue that these approximations lead to a defective bound for the optimization process and a loss of semantic connection among modalities. This paper presents a novel variational method on sets called the Set Multimodal VAE (SMVAE) for learning a multimodal latent space while handling the missing modality problem. By modeling the joint-modality posterior distribution directly, the proposed SMVAE learns to exchange information between multiple modalities and compensate for the drawbacks caused by factorization. On public datasets from various domains, the experimental results demonstrate that the proposed method is applicable to order-agnostic cross-modal generation while achieving outstanding performance compared to state-of-the-art multimodal methods. The source code for our method is available online at https://anonymous.4open.science/r/SMVAE-9B3C/. 1 INTRODUCTION Most real-life applications, such as robotic systems, social media mining, and recommendation systems, naturally contain multiple data sources, which raises the need for learning co-representation among diverse modalities Lee et al. (2020). Making use of additional modalities should improve the general performance of downstream tasks as it can provide more information from another perspective. In the literature, substantial improvements can be achieved by utilizing another modality as supplementary information Asano et al. (2020); Nagrani et al. (2020) or by multimodal fusion Atrey et al. (2010); Hori et al. (2017); Zhang et al. (2021). However, current multimodal research suffers severely from the lack of multimodal data with fine-grained labeling and alignment Sun et al. (2017); Beyer et al. (2020); Rahate et al. (2022); Baltrušaitis et al. (2018) and from missing modalities Ma et al. (2021); Chen et al. (2021). In the self-supervised and weakly-supervised learning fields, variational autoencoders (VAEs) for multimodal data Kingma & Welling (2013); Wu & Goodman (2018); Shi et al. (2019); Sutter et al. (2021) have been a dominating branch of development. VAEs are by definition generative self-supervised models that capture the dependency between an unobserved latent variable and the input observation. To jointly infer the latent representation and reconstruct the observations properly, multimodal VAEs are required to extract both modality-specific and modality-invariant features from the multimodal observations. Earlier works mainly suffer from scalability issues as they need to learn a separate model for each combination of modalities Pandey & Dukkipati (2017); Yan et al. (2016). More recent multimodal VAEs handle this issue and achieve scalability by approximating the true joint posterior distribution with the mixture or the product of uni-modality inference models Shi et al. (2019); Wu & Goodman (2018); Sutter et al. (2021).
However, our key insight is that their methods suffer from two critical drawbacks: 1) The implied conditional independence assumption and the corresponding factorization prevent their VAEs from modeling inter-modality correlations. 2) The aggregation of uni-modality inference results is by no means a co-representation of these modalities. To overcome these drawbacks of previous VAE methods, this work proposes the Set Multimodal Variational Autoencoder (SMVAE), a novel multimodal generative model that eschews factorization and instead relies solely upon set operations to achieve scalability. The SMVAE allows for better performance compared to the latest multimodal VAE methods and can handle input modalities of variable number and permutation. By learning the actual multimodal joint posterior directly, the SMVAE is the first multimodal VAE method that achieves scalable co-representation with missing modalities. A high-level overview of the proposed method is illustrated in Fig. 1. The SMVAE can handle a set of maximally M modalities as well as their subsets and allows cross-modality generation. Ei and Di represent the i-th embedding network and decoder network for the specific modality. µs, σs and µk, σk represent the parameters of the posterior distribution of the latent variable. By incorporating a set operation when learning the joint-modality posterior, we can simply drop the corresponding embedding networks when a modality is missing. Comprehensive experiments show that the proposed Set Multimodal Variational Autoencoder (SMVAE) outperforms state-of-the-art multimodal VAE methods and is immediately applicable to real-life multimodal data. 2 RELATED WORK 2.1 MULTIMODALITY VAES The core problem of learning a multimodal generative model is to maintain the model’s scalability to the exponential number of modality combinations. Existing multimodal generative models such as the Conditional VAE (CVAE) Pandey & Dukkipati (2017) and the joint-modality VAE (JMVAE) Suzuki et al. (2016) had difficulty scaling since they need to assign a separate inference model for each possible combination of inputs and outputs. To tackle this issue, follow-up works such as TELBO Vedantam et al. (2017), MVAE Wu & Goodman (2018), MMVAE Shi et al. (2019), and MoPoE Sutter et al. (2021) assume the variational approximation is factorizable. Thus, they focused on factorizing the approximation of the multimodal joint posterior q(z∣x1,⋯,xM) into a set of uni-modality inference encoders qi(z∣xi), such that q(z∣x1,⋯,xM) ≈ F({qi(z∣xi)}_{i=1}^{M}), where F(⋅) is a product or mean operation, depending on the chosen aggregation method. As discussed in Sutter et al. (2021), these scalable multimodal VAE methods differ only in the choice of aggregation method. Different from the multimodal VAE methods mentioned above, we attain the joint posterior in its original form without introducing additional assumptions on the form of the joint posterior. To handle the issue of scalability, we exploit a deterministic set operation function in the noise-outsourcing process. While existing multimodal VAE methods can be viewed as typical late-fusion methods that combine decisions about the latent variables Khaleghi et al. (2013), the proposed SMVAE method corresponds to early fusion at the representation level, allowing for the learning of correlation and co-representation from multimodal data. 2.2 METHODS FOR SET-INPUT PROBLEMS Multiple instance learning (MIL) Carbonneau et al. (2018) and 3D shape recognition Su et al. (2015); Hofer et al. (2005); Wu et al.
(2015) are well-known examples of weakly-supervised learning problems that deal with set inputs. MIL handles training data as numerous sets of instances with only set-level labels. A typical way to solve set-level classification problems is to use pooling methods for information aggregation Shao et al. (2021). Recently, Lee et al. (2019) observed that classical feed-forward neural networks like the multi-layer perceptron (MLP) Murtagh (1991) cannot guarantee invariance under permutation of the elements in the input, nor can they handle inputs of arbitrary size. Furthermore, recursive neural networks such as RNNs and LSTMs Hochreiter & Schmidhuber (1997) are sensitive to the order of the input sequences and cannot fit the multimodal case since there is no natural order for modalities. Recently, Deep Sets Zaheer et al. (2017) provided a formal definition of a permutation-invariant function for set-input problems and proposed a universal approximator for arbitrary set functions. Later on, the Set Transformer Lee et al. (2019) further extended this idea by using the self-attention mechanism to provide interactions as well as information aggregation among the elements of an input set. However, these methods only model set outputs as deterministic functions. Our work fills the gap between deterministic set functions and probabilistic distributions and applies it to multimodal unsupervised learning. 3 PROPOSED METHOD 3.1 PRELIMINARIES This work considers the multimodal learning problem as a set modeling problem and presents a scalable method for learning multimodal latent variables and cross-modality generation. Given a dataset {X^(i)}_{i=1}^{N} of N i.i.d. multimodal samples, we consider each sample as a set of M modality observations X^(i) = {x_j^(i)}_{j=1}^{M}. The multimodal data is assumed to be generated following the successive random process p(X, z) = pθ(X∣z)pθ(z), which involves an unobserved latent variable z. The prior distribution of the latent variable z is assumed to be pθ(z), with θ denoting its parameters. The marginal log-likelihood of this dataset of multimodal sets can be expressed as the summation of the marginal log-likelihoods of the individual sets, log ∏_{i=1}^{N} p(X^(i)) = ∑_{i=1}^{N} log p(X^(i)). Since the marginal likelihood of the dataset is intractable, we cannot optimize p({X^(i)}_{i=1}^{N}) with respect to θ directly. We instead introduce the variational approximation qϕ(z∣X) from a parametric family, parameterized by ϕ, as an importance distribution. qϕ(z∣X) is often parameterized by a neural network with ϕ as its trainable parameters. Together, we can express the marginal log-likelihood of a single multimodal set as: log p(X^(i)) = DKL(qϕ(z∣X^(i)) ∣∣ pθ(z∣X^(i))) + L(ϕ, θ; X^(i)), with L(ϕ, θ; X^(i)) = E_{z∼qϕ(z∣X^(i))}[log pθ(X^(i), z) − log qϕ(z∣X^(i))] = −DKL(qϕ(z∣X^(i)) ∣∣ pθ(z)) + E_{z∼qϕ(z∣X^(i))}[log pθ(X^(i)∣z)] (1), where DKL(⋅∣∣⋅) is the Kullback-Leibler (KL) divergence between two distributions. The non-negativity of the KL divergence between the variational approximation qϕ(z∣X^(i)) and the true posterior pθ(z∣X^(i)) in the first line makes L(ϕ, θ; X^(i)) the natural evidence lower bound (ELBO) for the marginal log-likelihood. The last line indicates that maximizing the ELBO is equivalent to maximizing the reconstruction performance while regularizing the variational approximation with the assumed prior distribution of the latent variable.
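For readers who want the intermediate step behind Eq. 1, the standard identity can be restated compactly as follows (notation as above; this is a restatement for convenience, not an additional result):

```latex
\log p_\theta(X^{(i)})
  = \mathbb{E}_{q_\phi(z \mid X^{(i)})}\!\left[\log \frac{p_\theta(X^{(i)}, z)}{q_\phi(z \mid X^{(i)})}\right]
    + D_{\mathrm{KL}}\!\left(q_\phi(z \mid X^{(i)}) \,\|\, p_\theta(z \mid X^{(i)})\right)
  \;\geq\;
  \underbrace{-\,D_{\mathrm{KL}}\!\left(q_\phi(z \mid X^{(i)}) \,\|\, p_\theta(z)\right)
    + \mathbb{E}_{q_\phi(z \mid X^{(i)})}\!\left[\log p_\theta(X^{(i)} \mid z)\right]}_{\mathcal{L}(\phi,\theta;\,X^{(i)})}
```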
To avoid confusion, we refer to the neural networks used for mapping the raw input observations into fixed-size feature vectors as embedding networks, and to the neural network used to parameterize the variational approximation qϕ(z∣X^(i)) as the encoder network. A frequently used version of the objective function is written as: argmax_ϕ −β DKL(qϕ(z∣X^(i)) ∣∣ p(z)) + E_{z∼qϕ(z∣X^(i))}[λ log p(X^(i)∣z)] (2), where an additional annealing coefficient β and reweighting coefficient λ are used in the ELBO to allow warm-up training, which gradually increases the regularization effect of the prior distribution and avoids reaching local minima in the early training stage Bowman et al. (2015); Sønderby et al. (2016). We drop the superscript of X^(i) for brevity in the remainder of the paper. 3.2 SET MULTIMODAL VARIATIONAL AUTOENCODER In multimodal scenarios with missing modalities, we consider each sample Xs = {xi ∣ i-th modality present} as a subset of X, with the power set P(X) denoting all 2^M combinations, such that Xs ∈ P(X). Our goal is to perform inference and generation from any number and permutation of available modalities, which requires an inference process that is invariant to permutations and accepts inputs of variable size. Following Definition 1, we denote the invariant inference process as p(z∣Xs) = p(z∣π⋅Xs). The ELBO for a subset Xs can be written as Eq. 3: Ls(ϕ, θ; Xs) = −DKL(qϕ(z∣Xs) ∣∣ pθ(z)) + E_{z∼qϕ(z∣Xs)}[log pθ(Xs∣z)] (3) Definition 1 Let Sn be the set of all permutations of the indices 1,⋯,n and let X = (x1,⋯,xn) denote n random variables. A probability distribution p(y∣X) is permutation invariant if and only if, for any permutation π ∈ Sn, p(y∣X) = p(y∣π⋅X), where ⋅ is the group action. The difference between L(ϕ, θ;X) in Eq. 1 and Ls(ϕ, θ;Xs) in Eq. 3 is that the ELBO for a subset Xs is not by itself a valid bound for log p(X). Additional sampling from P(X) in the optimization objective, as in Eq. 4, is needed for theoretical completeness: argmax_ϕ ∑_{Xs∼P(X), π∈Sn} Ls(ϕ, θ; π⋅Xs) (4), where π is a randomly generated permutation applied to the input subset Xs. However, this sampling process becomes trivial if we combine the sampling of the subsets with the sampling of mini-batches during training. By assuming a Gaussian form for the latent variable z and applying the reparameterization technique, the inference process of the SMVAE can be written as: z ∣ Xs ∼ N(µ, σ²), ϵ ∼ N(0, I) (5) z := µ + σ ⊙ ϵ (6) µ, log σ² := gϕ(E1(x1),⋯,Em(xm)) (7), where Ei is the embedding network for the i-th modality, gϕ(⋅) is a neural network with trainable parameters ϕ that provides the parameters of the latent posterior distribution (i.e., µ and σ), and ⊙ denotes element-wise multiplication. For the generation process, it is desirable to model the joint likelihood of the modalities conditioned on the latent variable, pθ(xs, z) = p(z)pθ(xs∣z), so that the model can utilize information from the other available modalities more easily when generating a complex modality. However, for ease of implementation, we assign M separate decoders D1,⋯,DM for all possible modalities, pθ(xs∣z) = [Dθ1(z),⋯,DθM(z)]. We find empirically that, without loss of generality, using L2 normalization as additional regularization to pull the parameters µ and σ of the inference network toward 0 and 1, respectively, improves learning efficiency, because the gradient from the ELBO often favors the reconstruction term over the regularization term.
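As an illustration only (not the authors' code), the following is a minimal PyTorch-style sketch of Eqs. 2 and 5-7 for one mini-batch. The embedding networks, the set encoder gϕ, and the decoders are assumed to be given, and each decoder is assumed to return a torch.distributions object; event dimensions are flattened for simplicity.

```python
import torch

def smvae_loss(modalities, embedders, set_encoder, decoders, beta=1.0, lam=1.0):
    """Sketch of Eqs. 2 and 5-7: embed the available modalities, aggregate them
    with a set encoder g_phi into (mu, logvar), reparameterize, and decode every
    present modality from the shared latent z."""
    # `modalities` is a dict {name: tensor}; missing modalities are simply absent.
    embeddings = [embedders[k](x) for k, x in modalities.items()]
    mu, logvar = set_encoder(torch.stack(embeddings, dim=1))        # g_phi over the set
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)         # reparameterization (Eq. 6)
    recon = sum(decoders[k](z).log_prob(x).sum(-1) for k, x in modalities.items())
    kl = 0.5 * (mu**2 + logvar.exp() - 1.0 - logvar).sum(-1)        # closed-form KL to N(0, I)
    return -(lam * recon - beta * kl).mean()                        # negative weighted ELBO (Eq. 2)
```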
3.3 SET REPRESENTATION FOR JOINT DISTRIBUTION The scalability issue comes from the requirement of an inference process for the power set P(X). We achieve scalability by using the noise-outsourced functional representation, i.e., z = g(ϵ, Xs), to bridge the gap between deterministic set functions and a stochastic function. The properties of the deterministic function can thus be passed on to the stochastic distribution under mild conditions Bloem-Reddy & Teh (2020). With this foundation, the problem of modeling the posterior for the power set immediately reduces to designing a differentiable deterministic function with the desired invariance properties. Specifically, we identify four critical requirements for weakly-supervised multimodal learning: the model should 1) be scalable in the number of observable modalities; 2) be able to process input modality sets of arbitrary size and permutation; 3) satisfy Theorem 1; and 4) be able to learn the co-representation among all modalities. Theorem 1 A valid set function f(x) is invariant to the permutation of instances if and only if it can be decomposed in the form Φ(∑Ψ(x)) for suitable transformations Φ and Ψ. An oversimplified example of a set function is the summation or product used in MVAE Wu & Goodman (2018) and MMVAE Shi et al. (2019). Pooling operations such as average pooling or max pooling also fit the definition. However, these set aggregation operations require additional factorization assumptions on the joint posterior and ultimately prevent the VAE from learning a co-representation of the input modalities, since aggregation is only applied at the decision level. To establish the inductive bias of inter-modality correlation, the self-attention mechanism without positional embeddings is a reasonable choice Edelman et al. (2022); Shvetsova et al. (2022). Therefore, the proposed SMVAE leverages self-attention as the deterministic set function to aggregate the embeddings of the multimodal inputs. Given the query Q, key K, and value V, an attention function is denoted as Att(Q, K, V) = ω(QK^T/√d_k)V, where K ∈ R^{m×d_k} and V ∈ R^{m×d_v} are m vectors of dimensions d_k and d_v, Q ∈ R^{n×d_q} are n vectors of dimension d_q, and ω is the softmax activation function. In our case, the key-value pairs represent the m available embeddings of the input modalities, m ≤ M. Each embedding is mapped to a d-dimensional embedding space by a modality-specific embedding network. By measuring the compatibility of the corresponding key and the query Q, information that is shared among modalities is aggregated into a co-representation. In practice, we utilize the multi-head extension of self-attention, denoted as MultiHead(Q, K, V, h) = Concat(A1,⋯,Ah)W^O, where Ai = Att_i(QW_i^Q, KW_i^K, VW_i^V) is obtained from the i-th attention function with projection parameters W_i^Q ∈ R^{(d/h)×d_q}, W_i^K ∈ R^{(d/h)×d_k}, W_i^V ∈ R^{(d/h)×d_k}, and W^O ∈ R^{d_v×d}; h denotes the total number of attention heads and d denotes the dimension of the projections for keys, values, and queries. Inspired by Lee et al. (2019), we design our deterministic set representation function gϕ(Xs) as follows: gϕ(Xs) := H + f_s(H), with H = I + MultiHead(I, Xs, Xs, h) (8), where I ∈ R^{1×d_v} is a d_v-dimensional trainable vector serving as the query for the multimodal embeddings and f_s is a fully-connected layer.
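A minimal sketch of how this aggregation step could look in code (PyTorch-style; the class name, dimensions, and the final linear map to (µ, log σ²) are assumptions made for illustration, not the authors' implementation):

```python
import torch
import torch.nn as nn

class SetAggregator(nn.Module):
    """Sketch of Eq. 8: a trainable query vector I attends over the set of
    modality embeddings, producing a fixed-size, permutation-invariant summary."""
    def __init__(self, dim, n_heads=4, latent_dim=64):
        super().__init__()
        # dim must be divisible by n_heads.
        self.I = nn.Parameter(torch.randn(1, 1, dim))          # trainable query (1 x d_v)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.fs = nn.Linear(dim, dim)                           # fully-connected layer f_s
        self.to_stats = nn.Linear(dim, 2 * latent_dim)          # maps to (mu, log sigma^2)

    def forward(self, embeddings):                              # embeddings: (batch, m, dim)
        q = self.I.expand(embeddings.size(0), -1, -1)
        h, _ = self.attn(q, embeddings, embeddings)             # MultiHead(I, X_s, X_s)
        h = q + h                                               # H = I + MultiHead(...)
        h = h + self.fs(h)                                      # g_phi(X_s) = H + f_s(H)
        mu, logvar = self.to_stats(h.squeeze(1)).chunk(2, dim=-1)
        return mu, logvar
```

Because the query is fixed and attention pools over the set elements symmetrically, permuting the input embeddings leaves the output unchanged, which is the property required of gϕ.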
By computing attention weights between I and each embedding, I not only acts as an aggregation vector that keeps the number of output vectors from gϕ(Xs) constant regardless of the number of input embeddings, but also selects relevant information from each embedding based on a similarity measurement. The former justifies gϕ(Xs) as a suitable permutation-invariant set-processing function, while the latter yields the desired co-representation among modalities. Finally, since the set representation function gϕ(Xs) is invariant to input permutations for inputs of different sizes, we obtain an invariant probabilistic inference function that satisfies Definition 1 through the noise-outsourced process shown in Eq. 6. Thus, by introducing the set representation function in the noise-outsourced process, the SMVAE is readily a scalable multimodal model for any subset of modalities. 3.4 TOTAL CORRELATION OPTIMIZATION WITHOUT CONDITIONAL INDEPENDENCE The lower bound for the multimodal data without factorizing the joint posterior (i.e., Eq. 1) provides additional information about the correlations among modalities during the optimization process compared to factorized methods. It is noteworthy that both MVAE and MMVAE depend on the assumption of conditional independence between modalities in their factorization. Without loss of generality, the relation between L(ϕ, θ;X) and the factorized case LCI is given in Eq. 9: L(ϕ, θ;X) = E_{qϕ(z∣X)}[log (pθ(z)∏_{i=1}^{M} pθ(xi∣z) / qϕ(z∣X)) + log (pθ(X, z) / (pθ(z)∏_{i=1}^{M} pθ(xi∣z)))] = LCI + E_{qϕ(z∣X)}[log (pθ(X∣z) / ∏_{i=1}^{M} pθ(xi∣z))] (9), where X ≡ (x1,⋯,xM) and LCI is the lower bound for a factorizable generative process as in MVAE or MMVAE. Specifically, letting q(X) denote the empirical distribution of the multimodal dataset, we have: E_{q(X)}[L(ϕ, θ;X)] = E_{q(X)}[LCI] + E_{z∼q(X)qϕ(z∣X)}[ E_{X∼pθ(X∣z)}[log (pθ(X∣z) / ∏_{i=1}^{M} pθ(xi∣z))] ] (10), where the second term on the right-hand side is the conditional total correlation. This reveals that, without assuming a factorizable generative process and enforcing conditional independence among modalities, our optimization objective naturally models the conditional total correlation, which provides information about the dependency among multiple input modalities Watanabe (1960); Studenỳ & Vejnarová (1998). Therefore, the SMVAE has the additional advantage of learning correlations among different modalities of the same event, which is also what we desire for a good co-representation. 4 EXPERIMENTS 4.1 EXPERIMENT SETTINGS We make use of uni-modal datasets including MNIST LeCun et al. (1998), FashionMNIST Xiao et al. (2017), and CelebA Liu et al. (2015) to evaluate the performance of the proposed SMVAE and compare it with other state-of-the-art methods. We convert these uni-modal datasets into bi-modal datasets by transforming the labels into one-hot vectors as the second modality, as in Wu & Goodman (2018); Suzuki et al. (2016). For quantitative evaluation, we denote x1 and x2 as the image and text modality and measure the marginal log-likelihood log p(x) ≈ log E_{q(z∣⋅)}[p(x∣z)p(z) / q(z∣⋅)], the joint log-likelihood log p(x, y) ≈ log E_{q(z∣⋅)}[p(z)p(x∣z)p(y∣z) / q(z∣⋅)], and the marginal conditional log-probability log p(x∣y) ≈ log E_{q(z∣⋅)}[p(z)p(x∣z)p(y∣z) / q(z∣⋅)] − log E_{p(z)}[p(y∣z)], using data samples from the test set. q(z∣⋅) denotes the importance distribution.
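As an illustration of these estimators (not the authors' evaluation code; the encoder, decoder, and prior handles and the Gaussian importance distribution are assumptions), a sketch of the importance-sampled marginal log-likelihood could look as follows:

```python
import math
import torch

def log_marginal_estimate(x, encoder, decoder, prior, n_samples=1000):
    """Estimates log p(x) ≈ log (1/S) Σ_s p(x|z_s) p(z_s) / q(z_s|x),
    with z_s drawn from the importance distribution q(z|x)."""
    mu, logvar = encoder(x)
    q = torch.distributions.Normal(mu, torch.exp(0.5 * logvar))
    log_w = []
    for _ in range(n_samples):
        z = q.rsample()                                   # sample from the importance distribution
        log_w.append(decoder(z).log_prob(x).sum(-1)       # log p(x|z)
                     + prior.log_prob(z).sum(-1)          # + log p(z)
                     - q.log_prob(z).sum(-1))             # - log q(z|x)
    log_w = torch.stack(log_w, dim=0)                     # shape: (S, batch)
    return torch.logsumexp(log_w, dim=0) - math.log(n_samples)
```

The joint and conditional estimates follow the same pattern by adding the second modality's log-likelihood term and, for the conditional case, subtracting a prior-sampled estimate of log p(y).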
For all the multimodal VAE methods, we keep the architecture of the encoders and decoders consistent for a fair comparison. Detailed training configurations and network settings are listed in Appendix B. The marginal probabilities measure the model’s ability to capture the data distributions, while the conditional log-probability measures classification performance. Higher scores on these metrics mean that a model is better able to generate proper samples and convert between modalities. These are the desirable properties of a learned generative model. 4.2 GENERATION QUALITY AND QUANTITATIVE EVALUATION We obtain 1000 importance samples to estimate the probability metrics. Table 1 shows the quantitative results of the proposed SMVAE for each dataset. We can see that the SMVAE outperforms the other methods on almost all metrics. The strong performance of the SMVAE is mainly attributable to the direct modeling of the joint posterior distribution and the optimization of a more informative objective. Fig. 3, Fig. 4, and Fig. 5 show cross-modality generation of image samples for each domain generated by the SMVAE model. We can see that, given the text modality only, the SMVAE can generate corresponding images of good quality. We further visualize the learned latent representation using t-SNE Hinton & Roweis (2002). As shown in Fig. 2, the latent space learned by the MVAE method is only cohesive when both modalities are present. When one modality is missing, the representations from their method are distributed irrespective of the semantic category of the data. On the other hand, although the MMVAE method achieves cohesive representations for the single-modality posteriors, its joint representation is less discriminative, indicating that combining uni-modal inference networks alone is insufficient to capture inter-modality co-representation. Nonetheless, our SMVAE method achieves a discriminative latent space for both single- and joint-modality inputs thanks to its ability to exploit the shared information from different modalities. 4.3 CASE STUDY: COMPUTER VISION APPLICATION We demonstrate that our SMVAE is able to learn image transformations including colorization, edge detection, facial landmark segmentation, image completion, and watermark removal. With the original image and each transformation treated as different modalities, we obtain 6 modalities in total by applying the different transformations to the ground-truth images. This case study demonstrates the SMVAE’s ability to generate in multiple directions and combinations. Similar to Wu & Goodman (2018), for edge detection we use the Canny detector Canny (1986) from the Scikit-Image module Van der Walt et al. (2014) to extract the edges of the facial image. For facial landmark segmentation, we use the Dlib tool King (2009) and OpenCV Bradski & Kaehler (2000). For colorization, we simply convert RGB colors to grayscale. For watermark removal, we add a watermark overlay to the original image. For image completion, we replace half of the image with black pixels. Fig. 6 shows samples generated from a trained SMVAE model. As can be seen in Fig. 6(a), the SMVAE generates a good reconstruction of the facial landmark segmentation and the extracted edges. In Fig. 6(b), we can see that the SMVAE is able to apply reasonable facial color to the input grayscale image. Fig. 6(c) demonstrates that the SMVAE can recover the image from the watermark and complete the image quite well.
The reconstructed right half of the image is largely consistent with the given left half of the original image. In Fig. 6(d), all traces of the watermark are also removed. Although our reconstructed images suffer from the same blurriness problem that is shared by VAE methods Zhao et al. (2017), the SMVAE is able to perform cross-modality generation thanks to its ability to capture the shared information among modalities. 4.4 CASE STUDY: ROBOTICS CONTROL APPLICATION The second case study shows that our method is readily applicable to robotics control scenarios using the Vision&Touch dataset Liang et al. (2021). We use the SMVAE to learn cross-modality generation from continuous sensory input to images. Emerging human-in-the-loop shared autonomy systems are often equipped with multiple sensors, which places high demands on the model’s ability to learn co-representation Lee et al. (2020); Luo et al. (2021); Chen et al. (2021); Selvaggio et al. (2021); Newman et al. (2022); Li et al. (2021). The Vision&Touch dataset is a real-world robot manipulation dataset that contains visual, tactile, control action, and robot proprioception data, offering a more diverse set of modalities. The robotic arm attempts to insert the peg located at its tip into the target object. We use a total of 4 modalities: the depth images, the RGB images, the 6-axis force sensor feedback, and the control action given to the robotic arm at each time step. Fig. 7(a) illustrates that when the robotic arm receives no force signals in the early steps, the reconstructed RGB images clearly show that the arm has no contact with the target box below. Only when the robotic arm receives high force readings does the generated image depict the contact between the robotic arm and the target box. The quality of the reconstructed RGB and depth images also differs between partial and full observation. When only limited information is observed (i.e., force and action inputs), our method can only reconstruct RGB and depth images that properly reflect the relative position between the robotic arm and the target object (Fig. 7(a)). But when more information is presented, the latent variable contains more comprehensive information about the event, yielding better reconstruction results, since we removed the conditional independence assumption (Fig. 7(b)). 5 CONCLUSION This paper proposes a multimodal generative model that incorporates set representation learning into the VAE framework. Unlike previous multimodal VAE methods, the proposed SMVAE provides a scalable solution for multimodal data of variable size and permutation. Critically, our model learns the joint posterior distribution directly without additional factorization assumptions, yielding a more informative objective and the ability to achieve co-representation between modalities. Statistical and visualization results demonstrate that our method outperforms other state-of-the-art multimodal VAE methods and has high potential in emerging multimodal tasks that need to learn a co-representation of diverse data sources while taking missing-modality or set-input processing problems into consideration. The application to cross-modality reconstruction on a robotic dataset further underlines this potential. In the future, we will explore methods that extend the current SMVAE framework to more diverse modalities as well as to dynamic multimodal sequences, to provide solutions for real-world multimodal applications.
1. What is the focus and contribution of the paper on multimodal VAE? 2. What are the strengths of the proposed approach, particularly in addressing scalability issues? 3. What are the weaknesses of the paper, especially regarding computational overhead and missing details? 4. Do you have any concerns or suggestions regarding the evaluation metrics used in the experiments? 5. Are there any typos or minor issues in the paper that should be addressed?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper studies multimodal VAEs. Compared to existing methods that rely on the conditional independence assumption, this paper relaxes that assumption by introducing the Set Multimodal VAE (SMVAE). The authors conducted experiments on three multimodal datasets to verify the effectiveness of SMVAE. Strengths And Weaknesses Strengths: This paper aims to address the scalability issues, which are well-motivated and critical in multimodal representation learning (MRL). The proposed method is technically feasible with intensive theoretical justification. Extensive experiments on multiple datasets empirically validate the performance of the proposed techniques, especially by comparing them with existing related works. Weaknesses: Incremental technical contribution. Leveraging self-attention to model cross-modal interaction is a well-studied idea in MRL, both empirically and in the literature. Computation overhead. In high-modality cases (e.g., >10 modalities), calculating self-attention is computationally prohibitive. It seems that the proposed method does not take this computational issue into account, which significantly undermines the claim of addressing the scalability issues. Missing details. What is the function of I (the query vector) in Eqn. 8? Is it to obtain the mean and variance of the joint distribution? It is common practice to add a modality-type embedding to the input data. Does your method follow the same practice? What is the purpose of adding noise to the variance (Eqn. 6)? How many self-attention layers are used in your method? Clarity, Quality, Novelty And Reproducibility Minor questions: Evaluation metrics. In Table 1, the quantitative results are not readily interpretable. Instead, [1] proposes several metrics which are more reader-friendly. The authors are encouraged to use these evaluation metrics for easier reading. Typos. Page 4 (3.2, last paragraph): “... regulate the parameter 0\mu …” → “... regulate the parameter \mu …”
1. What is the focus and contribution of the paper on multimodal VAEs? 2. What are the strengths of the proposed approach, particularly in its idea and motivation? 3. What are the weaknesses of the paper, especially regarding its evaluation and term definitions? 4. Do you have any questions regarding the methodology and its components? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors propose a multimodal VAE, which includes an intermediate set representation. This fixed-size representation of every modality is then used for the mapping to the variational approximation. The authors show the performance of their proposed method on a computer vision study, a label-image dataset, and a robotics application. Strengths And Weaknesses Strengths Idea: I find the idea of a set-based approach to multimodal learning interesting Motivation: the idea is well-motivated including related work. Weaknesses Evaluation: Previous Work (MMVAE, MoPoE) has used datasets with more than 2 modalities, which is an additional level of difficulty, but highlights the sensitivity of different methods to the number of modalities. MMVAE and MoPoE (and others) have highlighted the trade-off between the quality of generated samples (can be measured using test set log-likelihoods), coherence of samples (could be measured in conditional coherence), and a meaningful latent representation (accuracy of latent representation classification). The evaluation performed in this paper seems limited with only reporting quantitative results for log-likelihoods, but not the other metrics. Clarity: Not all terms are well-defined. For instance, I could not find a definition of co-representation. Co-representation seems to be an important term because it is repeatedly used. But without giving a clear definition of it. It is unclear to me where the performance gains come from (Table 1). Is it the set-based formulation? Or the additional conditional total correlation term? In my opinion, it would strengthen the paper if more insights are given into what parts of the proposed method are responsible for the performance improvements, and how sensitive the method to hyperparameters is (see Questions) Questions How is p ( X | z ) calculated? What is the cost of that? Does the proposed framework also work with simpler set functions? The objective in eq 10 does not define the set function. Hence, it should work with any function that fulfills theorem 1. Are there any insights on that? Are there additional hyperparameters needed to tune the optimization of eq. 10? If yes, are there any sensitivity analyses with respect to hyperparameter selection? At the end of section 3.2., the authors mention the use of additional L2-regularization. What is the effect of this regularization? To which term is it exactly applied? What is the regularization weight? It would make the final loss function of the method clear if more details would be provided. What is a co-representation? To me, the term is not clear, and I could not find a definition in this submission. In Section 3.3. the authors say that former multimodal VAEs are not able to learn co-representation. Is there any proof or reference for that? Or at least empirical evidence? Clarity, Quality, Novelty And Reproducibility Clarity The proposed work lacks clarity in some parts. It is not clear to me what makes the work perform well and not all terms are correctly defined (see Weaknesses and Questions). Quality The paper lacks quality with respect to evaluation and a clear description of all building blocks. The idea of using the set function for multimodal data is interesting. Novelty The paper presents a novel application of the set-based method to multimodal VAEs, which from a novelty point-of-view is enough. Reproducibility The authors provide details for all architectures and networks used. 
I am missing details of the embedding sizes and other hyperparameters and their sensitivity.
ICLR
Title Generalizing Multimodal Variational Methods to Sets Abstract Making sense of multiple modalities can yield a more comprehensive description of real-world phenomena. However, learning the co-representation of diverse modalities is still a long-standing endeavor in emerging machine learning applications and research. Previous generative approaches for multimodal input approximate a joint-modality posterior by uni-modality posteriors as product-of-experts (PoE) or mixture-of-experts (MoE). We argue that these approximations lead to a defective bound for the optimization process and a loss of semantic connection among modalities. This paper presents a novel variational method on sets called the Set Multimodal VAE (SMVAE) for learning a multimodal latent space while handling the missing modality problem. By modeling the joint-modality posterior distribution directly, the proposed SMVAE learns to exchange information between multiple modalities and compensates for the drawbacks caused by factorization. On public datasets from various domains, the experimental results demonstrate that the proposed method is applicable to order-agnostic cross-modal generation while achieving outstanding performance compared to state-of-the-art multimodal methods. The source code for our method is available online at https://anonymous.4open.science/r/SMVAE-9B3C/. 1 INTRODUCTION Most real-life applications such as robotic systems, social media mining, and recommendation systems naturally contain multiple data sources, which raises the need for learning co-representation among diverse modalities Lee et al. (2020). Making use of additional modalities should improve the general performance of downstream tasks, as it can provide more information from another perspective. In the literature, substantial improvements can be achieved by utilizing another modality as supplementary information Asano et al. (2020); Nagrani et al. (2020) or by multimodal fusion Atrey et al. (2010); Hori et al. (2017); Zhang et al. (2021). However, current multimodal research suffers severely from the lack of multimodal data with fine-grained labeling and alignment Sun et al. (2017); Beyer et al. (2020); Rahate et al. (2022); Baltrušaitis et al. (2018) and from missing modalities Ma et al. (2021); Chen et al. (2021). In the self-supervised and weakly-supervised learning field, variational autoencoders (VAEs) for multimodal data Kingma & Welling (2013); Wu & Goodman (2018); Shi et al. (2019); Sutter et al. (2021) have been a dominant branch of development. VAEs are, by definition, generative self-supervised models that capture the dependency between an unobserved latent variable and the input observation. To jointly infer the latent representation and reconstruct the observations properly, multimodal VAEs are required to extract both modality-specific and modality-invariant features from the multimodal observations. Earlier works mainly suffer from scalability issues, as they need to learn a separate model for each combination of modalities Pandey & Dukkipati (2017); Yan et al. (2016). More recent multimodal VAEs handle this issue and achieve scalability by approximating the true joint posterior distribution with the mixture or the product of uni-modality inference models Shi et al. (2019); Wu & Goodman (2018); Sutter et al. (2021).
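As a point of reference for the aggregation schemes mentioned above, the following is a schematic NumPy sketch of product-of-experts and mixture-of-experts fusion of uni-modal Gaussian posteriors; it illustrates the general idea only and is not the exact implementation of MVAE, MMVAE, or MoPoE.

```python
import numpy as np

def poe(mus, logvars):
    """Product-of-experts fusion (MVAE-style) of uni-modal Gaussian posteriors,
    including a standard-normal prior expert; returns the fused mean and log-variance."""
    precisions = [np.ones_like(mus[0])] + [np.exp(-lv) for lv in logvars]
    means = [np.zeros_like(mus[0])] + list(mus)
    joint_precision = sum(precisions)
    joint_mu = sum(m * p for m, p in zip(means, precisions)) / joint_precision
    return joint_mu, -np.log(joint_precision)

def moe_sample(mus, logvars, rng=None):
    """Mixture-of-experts fusion (MMVAE-style): pick one uni-modal expert uniformly
    at random and draw a reparameterized sample from it."""
    rng = np.random.default_rng() if rng is None else rng
    i = rng.integers(len(mus))
    std = np.exp(0.5 * logvars[i])
    return mus[i] + std * rng.standard_normal(mus[i].shape)
```

Both rules combine the uni-modal posterior parameters only after inference, which is the decision-level aggregation the paper contrasts with its set-based fusion.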
However, our key insight is that their methods suffer from two critical drawbacks: 1) the implied conditional independence assumption and the corresponding factorization keep their VAEs from modeling inter-modality correlations; 2) the aggregation of uni-modal inference results is by no means a co-representation of these modalities. To overcome these drawbacks of previous VAE methods, this work proposes the Set Multimodal Variational Autoencoder (SMVAE), a novel multimodal generative model eschewing factorization and instead relying solely upon set operations to achieve scalability. The SMVAE allows for better performance compared to the latest multimodal VAE methods and can handle input modalities of variable number and permutation. By learning the actual multimodal joint posterior directly, the SMVAE is the first multimodal VAE method that achieves scalable co-representation under missing modalities. A high-level overview of the proposed method is illustrated in Fig. 1. The SMVAE can handle a set of maximally M modalities as well as their subsets and allows cross-modality generation. Ei and Di represent the i-th modality-specific embedding network and decoder network. µs, σs and µk, σk represent the parameters of the posterior distribution of the latent variable. By incorporating a set operation when learning the joint-modality posterior, we can simply drop the corresponding embedding networks when a modality is missing. Comprehensive experiments show that the proposed Set Multimodal Variational Autoencoder (SMVAE) outperforms state-of-the-art multimodal VAE methods and is immediately applicable to real-life multimodal problems. 2 RELATED WORK 2.1 MULTIMODALITY VAES The core problem of learning a multimodal generative model is to maintain the model’s scalability with respect to the exponential number of modality combinations. Existing multimodal generative models such as the Conditional VAE (CVAE) Pandey & Dukkipati (2017) and the joint-modality VAE (JMVAE) Suzuki et al. (2016) had difficulty scaling since they need to assign a separate inference model for each possible combination of inputs and outputs. To tackle this issue, follow-up works such as TELBO Vedantam et al. (2017), MVAE Wu & Goodman (2018), MMVAE Shi et al. (2019), and MoPoE Sutter et al. (2021) assume the variational approximation is factorizable. Thus, they focus on factorizing the approximation of the multimodal joint posterior q(z∣x1,⋯,xM) into a set of uni-modality inference encoders qi(z∣xi), such that q(z∣x1,⋯,xM) ≈ F({xi}Mi=1), where F(⋅) is a product or mean operation, depending on the chosen aggregation method. As discussed in Sutter et al. (2021), these scalable multimodal VAE methods differ only in the choice of aggregation method. Different from the multimodal VAE methods mentioned above, we attain the joint posterior in its original form without introducing additional assumptions on its factorization. To handle the issue of scalability, we exploit a deterministic set function in the noise-outsourcing process. While existing multimodal VAE methods can be viewed as typical late fusion methods that combine decisions about the latent variables Khaleghi et al. (2013), the proposed SMVAE corresponds to early fusion at the representation level, allowing the correlation and co-representation of multimodal data to be learned. 2.2 METHODS FOR SET-INPUT PROBLEMS Multiple instance learning (MIL) Carbonneau et al. (2018) and 3D shape recognition Su et al. (2015); Hofer et al. (2005); Wu et al.
(2015) are well-known examples of weakly-supervised learning problems that deal with set inputs. MIL treats the training data as numerous sets of instances with only set-level labels. A typical way to solve set-level classification problems is to use pooling methods for information aggregation Shao et al. (2021). Recently, Lee et al. (2019) observed that classical feed-forward neural networks like the multi-layer perceptron (MLP) Murtagh (1991) can guarantee neither invariance under permutations of the input elements nor support for inputs of arbitrary size. Furthermore, recursive neural networks such as the RNN and LSTM Hochreiter & Schmidhuber (1997) are sensitive to the order of the input sequences and cannot fit the multimodal case, since there is no natural order for modalities. Recently, Deep Sets Zaheer et al. (2017) provided a formal definition of a permutation-invariant function for set-input problems and proposed a universal approximator for arbitrary set functions. Later on, the Set Transformer Lee et al. (2019) further extended this idea by using the self-attention mechanism to provide interactions as well as information aggregation among the elements of an input set. However, their method only models a set of outputs as a deterministic function. Our work fills the gap between a deterministic set function and a probabilistic distribution and applies it to multimodal unsupervised learning. 3 PROPOSED METHOD 3.1 PRELIMINARIES This work treats the multimodal learning problem as a set modeling problem and presents a scalable method for learning multimodal latent variables and cross-modality generation. Given a dataset {X(i)}Ni=1 of N i.i.d. multimodal samples, we consider each sample as a set of M modality observations X(i) = {x(i)j}Mj=1. The multimodal data is assumed to be generated by the successive random process p(X, z) = pθ(X∣z)p(z), which involves an unobserved latent variable z. The prior distribution of the latent variable z is assumed to be pθ(z), with θ denoting its parameters. The marginal log-likelihood of this dataset of multimodal sets can be expressed as the sum of the marginal log-likelihoods of the individual sets, log ∏Ni=1 p(X(i)) = ∑Ni=1 log p(X(i)). Since the marginal likelihood of the dataset is intractable, we cannot optimize p({X(i)}Ni=1) with respect to θ directly. We instead introduce a variational approximation qϕ(z∣X) from a parametric family, parameterized by ϕ, as an importance distribution. qϕ(z∣X) is often parameterized by a neural network with ϕ as its trainable parameters. Together, we can express the marginal log-likelihood of a single multimodal set as: log p(X(i)) = DKL(qϕ(z∣X(i)) ∣∣ pθ(z∣X(i))) + L(ϕ, θ;X(i)), with L(ϕ, θ;X(i)) = Ez∼qϕ(z∣X(i))[log pθ(X(i), z) − log qϕ(z∣X(i))] = −DKL(qϕ(z∣X(i)) ∣∣ pθ(z)) + Ez∼qϕ(z∣X(i))[log pθ(X(i)∣z)] (1), where DKL(⋅∣∣⋅) is the Kullback-Leibler (KL) divergence between two distributions. The non-negativity of the KL divergence between the variational approximation qϕ(z∣X(i)) and the true posterior pθ(z∣X(i)) in the first line makes L(ϕ, θ;X(i)) the natural evidence lower bound (ELBO) on the marginal log-likelihood. The last line indicates that maximizing the ELBO is equivalent to maximizing the reconstruction performance while regularizing the variational approximation towards the assumed prior distribution of the latent variable.
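As a concrete illustration of the decomposition in Eq. 1, the following is a minimal single-sample ELBO computation for a diagonal Gaussian posterior, a standard-normal prior, and a Bernoulli decoder; the encoder and decoder modules are placeholders and the distributional choices are our assumptions, not the architectures used in the paper.

```python
import torch
import torch.nn.functional as F

def elbo(encoder, decoder, x):
    """Single-sample ELBO of Eq. 1 for one observation x, assuming a diagonal
    Gaussian posterior q(z|x), a standard-normal prior p(z), and a Bernoulli
    decoder p(x|z). `encoder` and `decoder` are placeholder nn.Modules."""
    mu, logvar = encoder(x)                                      # parameters of q(z|x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)      # reparameterized sample
    recon_logits = decoder(z)
    log_px_given_z = -F.binary_cross_entropy_with_logits(
        recon_logits, x, reduction="sum")                        # one-sample E_q[log p(x|z)]
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) # KL(q(z|x) || N(0, I))
    return log_px_given_z - kl                                   # lower bound to maximize
```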
To avoid confusion, we refer to the neural networks that map the raw input observations into fixed-size feature vectors as embedding networks, and to the neural network that parameterizes the variational approximation qϕ(z∣X(i)) as the encoder network. A frequently used version of the objective function is written as: argmaxϕ −βDKL(qϕ(z∣X(i)) ∣∣ p(z)) + Ez∼qϕ(z∣X(i))[λ log p(X(i)∣z)] (2), where an annealing coefficient β and a reweighting coefficient λ are introduced into the ELBO to enable warm-up training, which gradually increases the regularization effect of the prior distribution and avoids local minima in the early training stage Bowman et al. (2015); Sønderby et al. (2016). We drop the superscript of X(i) for brevity in the remainder of the paper. 3.2 SET MULTIMODAL VARIATIONAL AUTOENCODER In multimodal scenarios with missing modalities, we consider each sample Xs = {xi ∣ i-th modality present} as a subset of X, with the power set P(X) denoting all 2M combinations, such that Xs ∈ P(X). Our goal is to perform inference and generation from any number and permutation of available modalities, which requires an inference process that is invariant to permutations and to inputs of variable size. Following Definition 1, we denote the invariant inference process as p(z∣Xs) = p(z∣π⋅Xs). The ELBO for a subset Xs can be written as Eq. 3: Ls(ϕ, θ;Xs) = −DKL(qϕ(z∣Xs) ∣∣ pθ(z)) + Ez∼qϕ(z∣Xs)[log pθ(Xs∣z)] (3). Definition 1 Let Sn be the set of all permutations of the indices 1,⋯, n and let X = (x1,⋯,xn) denote n random variables. A probabilistic distribution p(y∣X) is permutation invariant if and only if, for any permutation π ∈ Sn, p(y∣X) = p(y∣π⋅X), where ⋅ is the group action. The difference between L(ϕ, θ;X) in Eq. 1 and Ls(ϕ, θ;Xs) in Eq. 3 is that the ELBO for a subset Xs is not yet a valid bound for log p(X) by itself. Additional sampling from P(X) in the optimization objective, as in Eq. 4, is needed for theoretical completeness: argmaxϕ ∑Xs∼P(X), π∈Sn Ls(ϕ, θ;π⋅Xs) (4), where π is a randomly generated permutation applied to the input subset Xs. In practice, this sampling process becomes trivial if we combine the sampling of subsets with the sampling of mini-batches during training. By assuming a Gaussian form for the latent variable z and applying the reparameterization technique, the inference process of the SMVAE can be written as: p(z∣Xs) ∼ N(µ, σ2), ϵ ∼ N(0, I) (5), z := µ + σ ⊙ ϵ (6), µ, log σ2 := gϕ(E1(x1),⋯, Em(xm)) (7), where Ei is the embedding network for the i-th modality, gϕ(⋅) is a neural network with trainable parameters ϕ that provides the parameters of the latent posterior distribution (i.e., µ and σ), and ⊙ denotes element-wise multiplication. For the generation process, it is desirable to model the joint likelihood of the modalities conditioned on the latent variable, pθ(Xs, z) = p(z)pθ(Xs∣z), so that the model can more easily utilize information from the other available modalities when generating a complex modality. However, for ease of implementation, we assign M separate decoders D1,⋯, DM to the possible modalities, i.e. pθ(Xs∣z) = [Dθ1(z),⋯, DθM(z)]. We find empirically that, without loss of generality, adding an L2 regularization that pulls the parameters µ and σ of the inference network towards 0 and 1, respectively, facilitates learning, because the gradient from the ELBO often favors the reconstruction term over the regularization term.
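A minimal sketch of the inference step in Eqs. 5-7 for a sample with missing modalities is given below; the dictionary-based handling of absent modalities and the module names are illustrative assumptions rather than the paper's implementation.

```python
import torch

def smvae_encode(embed_nets, set_fn, modalities):
    """Inference step of Eqs. 5-7 for one sample: embed the available modalities,
    aggregate them with a permutation-invariant set function, and draw a
    reparameterized latent sample. `embed_nets` maps modality names to embedding
    networks and `set_fn` plays the role of g_phi; both are placeholders."""
    # Keep only the modalities present in this sample (the subset X_s).
    embeddings = [embed_nets[name](x) for name, x in modalities.items() if x is not None]
    emb_set = torch.stack(embeddings, dim=1)      # (batch, |X_s|, d) set of embeddings
    mu, logvar = set_fn(emb_set)                  # Eq. 7: parameters of q(z | X_s)
    eps = torch.randn_like(mu)                    # Eq. 5: noise outsourcing
    z = mu + torch.exp(0.5 * logvar) * eps        # Eq. 6: reparameterized sample
    return z, mu, logvar
```

Dropping a missing modality therefore amounts to omitting its embedding from the input set, with no change to the rest of the model.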
3.3 SET REPRESENTATION FOR JOINT DISTRIBUTION The scalability issue comes from the requirement of an inference process for the power set P(X). We achieve scalability by using the noise-outsourced functional representation, i.e. z = g(ϵ, Xs), to bridge the gap between a deterministic set function and a stochastic function. The properties of the deterministic function can thus be passed on to the stochastic distribution under minor conditions Bloem-Reddy & Teh (2020). With such a foundation, the problem of modeling the posterior for a superset immediately reduces to designing a differentiable deterministic function that has the desired invariance and elasticity properties. Specifically, we identify four critical requirements for weakly-supervised multimodal learning: the model should 1) be scalable in the number of observable modalities; 2) be able to process sets of input modalities of arbitrary size and permutation; 3) satisfy Theorem 1; and 4) be able to learn the co-representation among all modalities. Theorem 1 A valid set function f(x) is invariant to the permutation of instances if and only if it can be decomposed in the form Φ(∑Ψ(x)), for suitable transformations Φ and Ψ. Oversimplified examples of set functions are the summation or product used in MVAE Wu & Goodman (2018) and MMVAE Shi et al. (2019). Pooling operations such as average pooling or max pooling also fit the definition. However, these set aggregation operations require additional factorization assumptions on the joint posterior and ultimately prevent the VAE from learning a co-representation of the input modalities, as aggregation is only applied at the decision level. To establish the inductive bias of inter-modality correlation, the self-attention mechanism without positional embeddings is a reasonable choice Edelman et al. (2022); Shvetsova et al. (2022). Therefore, the proposed SMVAE leverages self-attention as the deterministic set function to aggregate the embeddings of the multimodal inputs. Given a query Q, key K and value V, an attention function is denoted as Att(Q, K, V) = ω(QKT/√dk)V, where K ∈ Rm×dk and V ∈ Rm×dv are m vectors of dimensions dk and dv, Q ∈ Rn×dq are n vectors of dimension dq, and ω is the softmax activation function. In our case, the key-value pairs represent the m available embeddings of the input modalities, m ≤ M. Each embedding is mapped to a d-dimensional embedding space by a modality-specific embedding network. By measuring the compatibility of the corresponding key and the query Q, information that is shared among modalities is aggregated as a co-representation. In practice, we utilize the multi-head extension of self-attention, MultiHead(Q, K, V, h) = Concat(A1,⋯, Ah)W^o, where Ai = Atti(QW^Q_i, KW^K_i, V W^V_i) is obtained from the i-th attention function with projection parameters W^Q_i ∈ R(d/h)×dq, W^K_i ∈ R(d/h)×dk, W^V_i ∈ R(d/h)×dv and W^o ∈ Rdv×d; h denotes the total number of attention heads and d denotes the dimension of the projections for keys, values and queries. Inspired by Lee et al. (2019), we design our deterministic set representation function gϕ(Xs) as follows: gϕ(Xs) := H + fs(H), H = I + MultiHead(I, Xs, Xs, h) (8), where I ∈ R1×dv is a dv-dimensional trainable vector serving as the query for the multimodal embeddings and fs is a fully-connected layer. Attention weights are computed between I and each modality embedding; a sketch of this aggregation step is given below.
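The aggregation step of Eq. 8 can be sketched with a single trainable query attending over the set of modality embeddings, for example as follows; the hidden width, head count, and the final linear map to (µ, log σ²) are placeholder choices, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class SetAggregator(nn.Module):
    """A sketch of the attention-based set function g_phi of Eq. 8: one trainable
    query vector I attends over the set of modality embeddings, so the output size
    is fixed regardless of how many modalities are present."""

    def __init__(self, dim=128, heads=4, latent_dim=64):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))        # trainable query I
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fc = nn.Linear(dim, dim)                             # f_s in Eq. 8
        self.to_stats = nn.Linear(dim, 2 * latent_dim)            # -> (mu, logvar)

    def forward(self, emb_set):
        # emb_set: (batch, num_present_modalities, dim); the ordering is irrelevant.
        q = self.query.expand(emb_set.size(0), -1, -1)
        h, _ = self.attn(q, emb_set, emb_set)                     # MultiHead(I, X_s, X_s)
        h = q + h                                                 # H = I + MultiHead(...)
        h = h + self.fc(h)                                        # g_phi(X_s) = H + f_s(H)
        mu, logvar = self.to_stats(h.squeeze(1)).chunk(2, dim=-1)
        return mu, logvar
```

Because the single query produces a softmax-weighted sum over the key-value set, the output is invariant to the order of the modality embeddings and its size does not depend on how many of them are present.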
Not only does I work as an aggregation vector that keeps the number of output vectors of gϕ(Xs) constant regardless of the number of input embeddings, it also selects relevant information from each embedding based on a similarity measure. The former justifies gϕ(Xs) as a suitable permutation-invariant set-processing function, while the latter yields the desired co-representation among modalities. Finally, since the set representation function gϕ(Xs) is invariant to input permutations for inputs of different sizes, we obtain an invariant probabilistic inference function that satisfies Definition 1 through the noise-outsourced process shown in Eq. 6. Thus, by introducing the set representation function into the noise-outsourced process, the SMVAE is readily a scalable multimodal model for any subset of modalities. 3.4 TOTAL CORRELATION OPTIMIZATION WITHOUT CONDITIONAL INDEPENDENCE The lower bound for the multimodal data without factorizing the joint posterior (i.e., Eq. 1) provides additional information about the correlations between modalities during the optimization process compared to factorized methods. It is noteworthy that both MVAE and MMVAE depend on the assumption of conditional independence between modalities in their factorization. Without loss of generality, the relation between L(ϕ, θ;X) and the factorized case LCI is shown in Eq. 9: L(ϕ, θ;X) = Eqϕ(z∣X)[log (pθ(z)∏Mi=1 pθ(xi ∣ z)) / qϕ(z ∣ X) + log pθ(X, z) / (pθ(z)∏Mi=1 p(xi ∣ z))] = LCI + Eqϕ(z∣X)[log pθ(X ∣ z) / ∏Mi=1 pθ(xi ∣ z)] (9), where X ≡ (x1,⋯,xM) and LCI is the lower bound for a factorizable generative process such as MVAE or MMVAE. Specifically, letting q(X) denote the empirical distribution of the multimodal dataset, we have: Eq(X)[L(ϕ, θ;X)] = Eq(X)[LCI] + Ez∼q(X)qϕ(z∣X)/pθ(X∣z)[EX∼pθ(X∣z)[log pθ(X ∣ z) / ∏Mi=1 pθ(xi ∣ z)]] (10), where the inner expectation is the conditional total correlation. This reveals that, without assuming a factorizable generative process and enforcing conditional independence among modalities, our optimization objective naturally models the conditional total correlation, which captures the dependencies among multiple input modalities Watanabe (1960); Studenỳ & Vejnarová (1998). Therefore, the SMVAE has the additional advantage of learning correlations among different modalities of the same event, which is also what we desire for a good co-representation. 4 EXPERIMENTS 4.1 EXPERIMENT SETTINGS We make use of uni-modal datasets including MNIST LeCun et al. (1998), FashionMNIST Xiao et al. (2017) and CelebA Liu et al. (2015) to evaluate the performance of the proposed SMVAE and to compare it with other state-of-the-art methods. We convert these uni-modal datasets into bi-modal datasets by transforming the labels into one-hot vectors as the second modality, as in Wu & Goodman (2018); Suzuki et al. (2016). For quantitative evaluation, we denote by x1 and x2 the image and text modality and measure the marginal log-likelihood log p(x) ≈ log Eq(z∣⋅)[p(x∣z)p(z)/q(z∣⋅)], the joint likelihood log p(x, y) ≈ log Eq(z∣⋅)[p(z)p(x∣z)p(y∣z)/q(z∣⋅)], and the marginal conditional probability log p(x∣y) ≈ log Eq(z∣⋅)[p(z)p(x∣z)p(y∣z)/q(z∣⋅)] − log Ep(z)[p(y∣z)], using data samples from the test set. q(z∣⋅) denotes the importance distribution.
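For the log-likelihood metrics above, the importance-sampled estimate can be computed stably in log space, for instance as in the following sketch; the helper name and the assumption that per-sample log densities are precomputed are ours.

```python
import math
import torch

def log_marginal_estimate(log_px_given_z, log_pz, log_qz):
    """Importance-sampled estimate of log p(x) from K latent samples z_k ~ q(z|.):
    log p(x) ~= log (1/K) sum_k p(x|z_k) p(z_k) / q(z_k|.), evaluated in log space.
    Each argument is a length-K tensor of per-sample log densities."""
    log_weights = log_px_given_z + log_pz - log_qz            # log importance weights
    return torch.logsumexp(log_weights, dim=0) - math.log(log_weights.size(0))
```

The conditional estimate log p(x∣y) then follows by subtracting an analogous estimate of log p(y) obtained with samples z ∼ p(z).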
1. What is the focus and contribution of the paper on multimodal VAEs? 2. What are the strengths of the proposed approach, particularly in terms of using self-attention to model correlations between modalities? 3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works? 4. Do you have any concerns regarding the use of attention mechanisms in multimodal VAEs? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper introduces the Set Multimodal VAE (SMVAE), a new type of multimodal VAE that uses self-attention to model the joint posterior as a learnable function of an arbitrary subset of input modalities. While previous approaches use basic aggregation functions (e.g., the product of experts) to aggregate the unimodal embeddings across modalities, the SMVAE employs a self-attention module, i.e., a learnable aggregation function. The paper claims that previous approaches are limited by the conditional independence assumption implied by the respective aggregation function that they use and that the self-attention aggregation is better for modeling correlations between modalities. Empirically, the SMVAE shows slight improvements over some of the existing methods as well as promising results in two case studies. Strengths And Weaknesses Strengths The authors propose a simple yet intuitive and potentially impactful idea: to use a learned aggregation function using an attention mechanism to model the joint posterior in multimodal VAEs. The qualitative and quantitative results on bi-modal image/label datasets look promising. The two case studies show promising results for real-world applications. Weaknesses The authors claim that posteriors with basic aggregation functions (e.g., PoE and MoE) lead to a "defective bound" for the optimization and a loss of semantic information. However, the paper provides insufficient theoretical or empirical evidence to back up these claims. The term "co-representation" does not seem to be defined, neither in the manuscript nor in the cited works. The term is used throughout the manuscript, including in central places like the abstract, where it is stated that "learning the co-representation [...] is a long-standing endeavor". Empirically, the SMVAE shows slight improvements in multimodal generative learning compared to some of the existing methods. However, the quantitative comparison only features very simple multimodal datasets with only two modalities, one of which represents one-hot encoded labels. It would be helpful, if the authors would compare the performance on slightly more realistic multimodal datasets, such as MNIST/SVHN, CUB, and PolyMNIST. There are many components and hyperparameters whose effects on the performance are not sufficiently clear and should be studied with additional ablations. For example: L2-normalization, the aggregation vector in Equation (8), the use of multiple attention heads, no use of positional embeddings, etc. There are several things unclear about the objective, which does not even seem to be defined explicitly. At training time, does the input to the multi-head attention function (Eq. 8) consists of two times the same subset of modalities? Do these two sets differ in their order of modalities? Is the objective a valid ELBO on the complete set of modalities? Where is the proof of Theorem 1? Is it an existing result from previous work? The related work section does not make it sufficiently clear how the proposed idea of using attention to model the correlations between modalities relates to existing multimodal models with attention mechanisms. E.g.: Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, Chen Sun: Attention Bottlenecks for Multimodal Fusion. NeurIPS 2021 Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, João Carreira: Perceiver: General Perception with Iterative Attention. 
ICML 2021 Xinyang Geng, Hao Liu, Lisa Lee, Dale Schuurams, Sergey Levine, Pieter Abbeel: Multimodal Masked Autoencoders Learn Transferable Representations. CoRR abs/2205.14204 (2022) Clarity, Quality, Novelty And Reproducibility There is a serious lack of clarity and quality. Sections 3.2 and 3.3 are particularly hard to understand and are riddled with typos. The objective should be stated explicitly and the theoretical results would benefit from a clearer statement of the necessary assumptions and the relevance and novelty of the results. The empirical results should be more thorough: the evaluation lacks a comparison with previous methods on more realistic multimodal datasets, and there are no ablation studies for relevant components and hyperparameters. The contributions are potentially significant and somewhat new. Intuitively, the benefit of a learned aggregation function is clear and somewhat novel (at least with respect to multimodal VAEs; see Strengths and Weaknesses). However, the claims about the drawbacks of existing methods need stronger theoretical and empirical support. Similarly, the claims regarding the outstanding performance compared to existing methods requires more thorough experiments. The lack of clarity implies limited reproducibility as the implementation would not be straightforward given the information provided in the text. There are no standard deviations or uncertainty estimates for the experiments. However, it is positive that the authors provide code for their experiments.
ICLR
Title Generaling Multimodal Variational Methods to Sets Abstract Making sense of multiple modalities can yield a more comprehensive description of real-world phenomena. However, learning the co-representation of diverse modalities is still a long-standing endeavor in emerging machine learning applications and research. Previous generative approaches for multimodal input approximate a joint-modality posterior by uni-modality posteriors as product-ofexperts (PoE) or mixture-of-experts (MoE). We argue that these approximations lead to a defective bound for the optimization process and loss of semantic connection among modalities. This paper presents a novel variational method on sets called the Set Multimodal VAE (SMVAE) for learning a multimodal latent space while handling the missing modality problem. By modeling the joint-modality posterior distribution directly, the proposed SMVAE learns to exchange information between multiple modalities and compensate for the drawbacks caused by factorization. In public datasets of various domains, the experimental results demonstrate that the proposed method is applicable to order-agnostic cross-modal generation while achieving outstanding performance compared to the state-ofthe-art multimodal methods. The source code for our method is available online https://anonymous.4open.science/r/SMVAE-9B3C/. 1 INTRODUCTION Most real-life applications such as robotic systems, social media mining, and recommendation systems naturally contain multiple data sources, which raise the need for learning co-representation among diverse modalities Lee et al. (2020). Making use of additional modalities should improve the general performance of downstream tasks as it can provide more information from another perspective. In literatures, substantial improvements can be achieved by utilizing another modality as supplementary information Asano et al. (2020); Nagrani et al. (2020) or by multimodal fusion Atrey et al. (2010); Hori et al. (2017); Zhang et al. (2021). However, current multimodal research suffers severely from the lack of multimodal data with fine-grained labeling and alignment Sun et al. (2017); Beyer et al. (2020); Rahate et al. (2022); Baltrušaitis et al. (2018) and the missing of modalities Ma et al. (2021); Chen et al. (2021). In the self-supervised and weakly-supervised learning field, the variational autoencoders (VAEs) for multimodal data Kingma & Welling (2013); Wu & Goodman (2018); Shi et al. (2019); Sutter et al. (2021) have been a dominating branch of development. VAEs are generative self-supervised models by definition that capture the dependency between an unobserved latent variable and the input observation. To jointly infer the latent representation and reconstruct the observations properly, the multimodal VAEs are required to extract both modality-specific and modality-invariant features from the multimodal observations. Earlier works mainly suffer from scalability issues as they need to learn a separate model for each modal combination Pandey & Dukkipati (2017); Yan et al. (2016). More recent multimodal VAEs handle this issue and achieves scalability by approximating the true joint posterior distribution with the mixture or the product of uni-modality inference models Shi et al. (2019); Wu & Goodman (2018); Sutter et al. (2021). 
However, our key insight is that their methods suffer from two critical drawbacks: 1) The implied conditional independence assumption and corresponding factorization deviate their VAEs from modeling inter-modality correlations. 2) The aggregation of inference results from uni-modality is by no means a co-representation of these modalities. To overcome these drawbacks of previous VAE methods, this work proposes the Set Multimodal Variational Autoencoder (SMVAE), a novel multimodel generative model eschewing factorization and instead relying solely upon set operation to achieve scalability. The SMVAE allows for better performance compared to the latest multimodal VAE methods and can handle input modalities of variable numbers and permutations. By learning the actual multimodal joint posterior directly, the SMVAE is the first multimodal VAE method that achieves scalable co-representation with missing modalities. A high-level overview of the proposed method is illustrated in Fig.1. The SMVAE can handle a set of maximally M modalities as well as their subsets and allows cross-modality generations. Ei and Di represent the i−th embedding network and decoder network for the specific modality. µs, σs and µk, σk represent the parameters for the posterior distribution of the latent variable. By incorporating set operation when learning the joint-modality posterior, we can simply drop the corresponding embedding networks when a modality is missing. Comprehensive experiments show the proposed Set Multimodal Variational Autoencoder (SMVAE) outperforms state-of-the-art multimodal VAE methods and is immediately applicable to real-life multimodality. 2 RELATED WORK 2.1 MULTIMODALITY VAES The core problem of learning a multimodal generative model is to maintain the model’s scalability to the exponential number of modal combinations. Existing multimodal generative models such as Conditional VAE (CVAE)Pandey & Dukkipati (2017) and joint-modality VAE (JMVAE) Suzuki et al. (2016) had difficulty scaling since they need to assign a separate inference model for each possible input and output combinations. To tackle this issue, follow-up works, such as, TELBO Vedantam et al. (2017), MVAE Wu & Goodman (2018), MMVAE Shi et al. (2019), MoPoE Sutter et al. (2021), assume the variational approximation is factorizable. Thus, they focused on factorizing the approximation of the multimodal joint posterior q(z∣x1,⋯,xM) into a set of uni-modality inference encoders qi(z∣xi), such that q(z∣x1,⋯,xM) ≈ F ({xi}Mi=1), where F (⋅) is a product or mean operation, depending on the chosen aggregation method. As discussed in Sutter et al. (2021), these scalable multimodal VAE methods differ only in the choice of aggregation method. Different from those mentioned above multimodal VAE methods, we attain the joint posterior in its original form without introducing additional assumptions on the form of the joint posterior. To handle the issue of scalability, we exploit the deterministic set operation function in the noise-outsourcing process. While existing multimodal VAE methods can be viewed as typical late fusion method that combines decisions about the latent variables Khaleghi et al. (2013), the proposed SMVAE method corresponds to the early fusion method at the representation level, allowing for the learning of correlation and co-representation from multimodal data. 2.2 METHODS FOR SET-INPUT PROBLEMS Multiple instance learning (MIL) Carbonneau et al. (2018) and 3D shape recognition Su et al. (2015); Hofer et al. (2005); Wu et al. 
(2015), are well-known examples of weakly-supervised learning problems that deal with set-input. MIL handles training data as numerous sets of instances with only set-level labels. A typical way to solve set-level classification problems is to use pooling methods for information aggregation Shao et al. (2021). Recently, Lee et al. (2019) observed that classical feed-forward neural networks like the multi-layer perception (MLP) Murtagh (1991) cannot guarantee invariance under the permutation of the elements in the input as well as the input of arbitrary sizes. Furthermore, recursive neural networks such as RNN and LSTM Hochreiter & Schmidhuber (1997) are sensitive to the order of the input sequences, and cannot fit the multimodal case since there is no natural order for modalities. Recently, Deep Sets Zaheer et al. (2017) provided a formal definition for a permutation invariant function in set-input problems and proposed a universal approximator for arbitrary set functions. Later on, Set Transformer Lee et al. (2019) further extends this idea by using the self-attention mechanism to provide interactions as well as information aggregation among elements from an input set. However, their method only models a set of outputs as a deterministic function. Our work fills the gap between a deterministic set function to a probabilistic distribution and applies it to multimodal unsupervised learning. 3 PROPOSED METHOD 3.1 PRELIMINARIES This work considers the multimodal learning problem as a set modeling problem and presents a scalable method for learning multimodal latent variables and cross-modality generation. Given a dataset {X(i)}Ni=1 of N i.i.d. multimodal samples, we consider each of the sample as a set of M modalities observations X(i) = {x(i)j } M j=1. The multimodal data is assumed to be generated following the successive random process p(X, z) = pθ(X∣z)p(z) which involves an unobserved latent variable z. The prior distribution of the latent variable z is assumed to be pθ(z), with θ denoting its parameters. The marginal log-likelihood of this dataset of multimodal sets can be expressed as a summation of marginal log-likelihood of individual sets as log p(X(i)) as log∏Ni=1 p(X(i)) = ∑Ni=1 log p(X(i)). Since the marginal likelihood of the dataset is intractable, we cannot optimize p({X(i)}Ni=1) with regards to θ directly. We instead introduce the variational approximation qϕ(z∣X) from a parametric family, parameterized by ϕ, as an importance distribution. qϕ(z∣X) is often parameterized by a neural network with ϕ as its trainable parameters. Together, we can express the marginal log-likelihood of a single multimodal set as: log p(X(i)) = DKL(qϕ(z∣X(i))∣∣pθ(z∣X(i))) + L(ϕ, θ;X(i)) L(ϕ, θ;X(i)) = Ez∼qϕ(z∣X(i)) [log pθ(X (i)), z) − log qϕ(z∣X(i))] = −DKL(qϕ(z∣X(i))∣∣pθ(z)) + Ez∼qϕ(z∣X (i)) [log pθ(X(i)∣z)] (1) , where DKL(⋅∣∣⋅) is the Kullback-Leibler (KL) divergence between two distributions. The nonnegative property of the KL divergence term between the variational approximation qϕ(z∣X(i)) and the true posterior pθ(z∣X(i)) in the first line makes L(ϕ, θ;X(i)) the natural evidence lower bound (ELBO) for the marginal log-likelihood. The last line indicates that maximizing the ELBO is equivalent to maximizing the reconstruction performance and regulating the variational approximation using the assumed prior distribution for the latent variable. 
To avoid confusion, we term neural networks used for mapping the raw input observations into a fixed-sized feature vector as the embedding network while the neural network used to parameterize the variational approximation qϕ(z∣X(i)) as the encoder network. A frequently used version of the objective function is written as: argmin ϕ − βDKL(qϕ(z∣X(i))∣∣p(z)) + Ez∼qϕ(z∣X(i)) [λ log p(X (i)∣z)] (2) , where additional annealing coefficients β and reweighting coefficient λ are used in the ELBO to allow gradients and warm-up training which gradually increases the regularization effect from the prior distribution and avoids reaching local minima in the early training stage Bowman et al. (2015); Sønderby et al. (2016). We drop the superscript of X(i) to maintain brevity in the following paper. 3.2 SET MULTIMODAL VARIATIONAL AUTOENCODER In multimodal scenarios with missing modalities, we consider each sample Xs = {xi∣ithmodaltiy present} as a subset of X and the powerset P(X) denoting all the 2M combinations, such that Xs ∈ P(X). Our goal is to perform inference and generation from any number and permutation of available modalities, which requires an inference process is invariant to permutations and input of variable size. Following Definition 1, we denotes the invariant inference process as p(z∣Xs) = p(z∣π⋅Xs). The ELBO for a subset Xs can be written as Eq.3. Ls(ϕ, θ;Xs) = −DKL(qϕ(z∣Xs)∣∣pθ(z)) + Ez∼qϕ(z∣Xs) [log pθ(Xs∣z)] (3) Definition 1 Let Sn be a set of all permutations of indices 1,⋯, N , X = (x1,⋯xn) denotes n random variables. A probabilistic distribution p(y∣X) is permutation inariant if and only if for any permutation π ∈ Sn, p(y∣X) = p(y∣π⋅X), where ⋅ is the group action. The difference between L(ϕ, θ;X) in Eq.1 and Ls(ϕ, θ;Xs) in Eq.3 is that the ELBO for a subset Xs is not yet a valid bound for log p(X) by itself. Additional sampling from P(X) in the optimization objective as Eq.4 is needed for theoretical completeness. argmin ϕ ∑ Xs∼P(X) π∈Sn Ls(ϕ, θ;π⋅Xs) (4) , where π is a randomly generated permutation to the input subset Xs. However, this sampling process can be trivial if we combine the sampling of the subsets with the sampling of mini-batch during training. By assuming the Gaussian form of the latent variable z and applying the reparameterization technique, the inference process of SMVAE can be written as: p(z∣xs) ∼ N (µ, σ2), ϵ ∼ N (0, I) (5) z ∶= µ + σ ⊙ ϵ (6) µz, log σ 2 z ∶= gϕ(E1(x1),⋯, Em(xm)) (7) , where Ei are embedding network for the i th modality, gϕ(⋅) is a neural network with trainable parameters ϕ that provide the parameter for the latent’s posterior distribution (i.e., µ and σ) , ⊙ denotes the element-wise multiplication. For the generation process, it is desired to models the joint likelihood of modalities conditioned on the latent variables pθ(xs, z) = p(z)pθ(xs∣z) so that the model can utilize information from other available modalities more easier when generating a complex modality. However, for the sake of easy implementation, we assign n separate decoders D1,⋯, DM for all possible modalities as pθ(xs∣z) = [Dθ1(z),⋯, DθM (z)]. We find empirically that, without loss of generality, using L2−normalization as additional regularization to regulate the parameter oµ and σ of the inference network to 0 and 1 respectively could facilitate the learning efficiency because the gradient from the ELBO often favors the reconstruction term over the regularization term. 
3.3 SET REPRESENTATION FOR JOINT DISTRIBUTION The scalability issue comes from the requirement for an inference process for the powerset P(X). We achieve scalability by using the noise-outsourced functional representation, i.e. z = g(ϵ,Xs), to bridge the gap between the deterministic set functions to a stochastic function. The properties of the deterministic function thus can be passed to the stochastic distribution under minor conditions Bloem-Reddy & Teh (2020). With such a foundation, the problem of modeling the posterior for a superset immediately reduces to designing a differentiable deterministic function that has the desired invariant or elastic properties. Specifically, we identify four critical requirements for weaklysupervised multimodal learning. Being that the model should 1) be scalable in the number of observable modalities; 2) be able to process input modalities sets of arbitrary size and permutation; 3) satisfy Theorem 1; and 4) be able to learn the co-representation among all modalities. Theorem 1 A valid set function f(x) is invariant to the permutation of instances, iif it can be decomposed in the form Φ(∑Ψ(x)), for any suitable transformations Φ and Ψ. An oversimplified example of a set function can be summation or product as done in MVAE Wu & Goodman (2018) and MMVAE Shi et al. (2019). Pooling operations such as average pooling or max pooling also fit the definition. However, these set aggregation operations will require additional factorization assumptions to the joint posterior and ultimately forbid the VAE to learn corepresentation of the input modalities as aggregation is only applied at the decision level. To establish the inductive bias of inter-modality correlation, the self-attention mechanism without positional embeddings is a reasonable choice Edelman et al. (2022); Shvetsova et al. (2022). Therefore, the proposed SMVAE leverages self-attention as the deterministic set function to aggregate embeddings of multimodal inputs. Given the query Q, key K and value V , an attention function is denoted as Att(Q,K, V ) = ω(QK T √ dk )V , where K ∈ Rm×dk and V ∈ Rm×dv are m vectors of dimension dk and dv , Q ∈ R n×dq are n vectors of dimension dq , ω is the softmax activation function. In our case, the key-value pairs represent the m available embeddings of input modalities, m ≤ M . Each embedding is mapped to a d−dimensional embedding space by a modality-specific embedding network. By measuring the compatibility of the corresponding key and the query Q, information that is shared among modalities is aggregated as co-representation. In practice, we utilize the multi-head extension of self-attention denoted as MultiHead(Q,K, V, h) = Concat(A1,⋯, Ah)W o, where Ai = Atti(QWQi ,KW K i , V W v i ) is obtained from the ith attention function with projection parameters WQi ∈ R (d/h)×dq ,WKi ∈ R (d/h)×dk , WVi ∈ R (d/h)×dk and W o ∈ Rdv×d, h denotes the total number of attention heads and d denotes the dimension of the projections for keys, values and queries. Inspired by Lee et al. (2019), we design our deterministic set representation function gϕ(Xs) as follows: gϕ(Xs) ∶= H + fs(H) H = I +MultiHead(I,Xs,Xs, h) (8) , where I ∈ R1×dv is an dv-dimensional trainable vector as the query vector for multimodal embeddings. fs is a fully-connected layer. By calculating attention weights using I and each embedding. 
Not only does I work as an aggregation vector that regulates the number of output vectors from gϕ(Xs) to be constant regardless of the number of input embeddings, but also it selects relevant information from each embedding base on similarity measurement. The former justifies gϕ(Xs) as a suitable permutation invariant set-processing function while the latter yields the desired co-representation among modalities. Finally, Since the set representation function gϕ(Xs) is invariant to the input permutations of different input sizes, we achieved an invariant inference probabilistic function that satisfies Definition 1 through the noise-outsourced process as shown in Eq. 6. Thus, by introducing the set representation function in the noise-outsourced process, the SMVAE is readily a scalable multimodal model for any subsets of modalities. 3.4 TOTAL CORRELATION OPTIMIZATION WITHOUT CONDITION INDEPENDENCE The lower bound of the multimodal data without factorizing the joint posterior (i.e., Eq. 1) provides additional information about the correlations of modalities during the optimization process compared to factorized methods. It is noteworthy that both MVAE and MMVAE depend on the assumption of conditional independence between modalities in factorization. Without loss of generality, the relation between L(ϕ, θ;X) and the factorized case LCI can be shown in Eq. 9. L(ϕ, θ;X) = Eqϕ(z∣X) [log pθ(z)∏Mi=1 pθ(xi ∣ z) qϕ(z ∣ X) + log pθ(X, z) pθ(z)∏Mi=1 p(xi ∣ z) ] = LCI+ Eqϕ(z∣X) [log pθ(X ∣ z) ∏Mi=1 pθ(xi ∣ z) ] (9) , where X ≡ (x1,⋯,xM) and LCI is the lower bound for factorizable generative process as MVAE or MMVAE. Specifically, let q(X) denotes the empirical distribution for the multimodal dataset, we have: Eq(X) [L(ϕ, θ;X)] = Eq(X) [LCI] + Ez∼ q(X)qϕ(z∣X) pθ (X∣z) ⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣ EX∼pθ(X∣z) [log pθ(X ∣ z) ∏Mi=1 pθ(xi ∣ z) ] ÍÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÑÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÏ conditional total correlation ⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦ (10) , which reveals that without assuming a factorizable generative process and enforcing conditional independence among modalities, our optimization objective naturally models the conditional total correlation which provides information of dependency among multiple input modalities Watanabe (1960); Studenỳ & Vejnarová (1998). Therefore, the SMVAE has the additional advantage of learning correlations among different modalities of the same event, which is also what we desired for good co-representation. 4 EXPERIMENTS 4.1 EXPERIMENT SETTINGS We make use of uni-modal datasets including MNIST LeCun et al. (1998), FashionMNIST Xiao et al. (2017) and CelebA Liu et al. (2015) to evaluate the performance of the proposed SMVAE and compare with other state-of-the-art methods. We convert these uni-modal datasets into bi-modal dataset by transforming the labels to one-hot vectors as the second modality as in Wu & Goodman (2018); Suzuki et al. (2016). For quatitative evaluation, we denote x1 and x2 as the image and text modality and measure the marginal log-likelihood, log p(x) ≈ logEq(z∣⋅)[p(x∣z)p(z)q(z∣⋅) ], the joint likelihood log p(x,y) ≈ logEq(z∣⋅)[p(z)p(x∣z)p(y∣z)q(z∣⋅) ], and the marginal conditional probability, log p(x∣y) ≈ logEq(z∣⋅)[p(z)p(x∣z)p(y∣z)q(z∣⋅) ]−logEp(z)[p(y∣z)], using data samples from the test set. q(z∣⋅) denotes the importance distribution. 
For all the multimodal VAE methods, we keep the architecture of encoders and decoders consistent for a fair comparison. Detailed training configurations and network settings are listed in Appendix B. The marginal probabilities measure the model's ability to capture the data distributions, while the conditional log-probability measures classification performance. Higher scores on these metrics mean that a model is better able to generate proper samples and convert between modalities, which are the desirable properties of a generative model. 4.2 GENERATION QUALITY AND QUANTITATIVE EVALUATION We draw 1000 importance samples to estimate the probability metrics. Table 1 shows the quantitative results of the proposed SMVAE for each dataset. We can see that the SMVAE outperforms the other methods on almost all metrics. The strong performance of SMVAE is mainly attributable to the direct modeling of the joint posterior distribution and optimization of a more informative objective. Fig. 3, Fig. 4, and Fig. 5 show cross-modality generation of image samples for each domain by the SMVAE model. We can see that, given the text modality only, the SMVAE can generate corresponding images of good quality. We further visualize the learned latent representation using tSNE Hinton & Roweis (2002). As shown in Fig. 2, the latent space learned by the MVAE method only produces a cohesive latent representation when both modalities are present. When one modality is missing, representations from their method are distributed irrespective of the semantic category of the data. On the other hand, although the MMVAE method achieves a cohesive representation for the single-modality posterior, its joint representation is less discriminative, indicating that combining uni-modal inference networks alone is insufficient to capture inter-modality co-representation. Nonetheless, our SMVAE achieves a discriminative latent space for both single- and joint-modality inputs thanks to its ability to exploit shared information from different modalities. 4.3 CASE STUDY: COMPUTER VISION APPLICATION We demonstrate that our SMVAE is able to learn image transformations including colorization, edge detection, facial landmark segmentation, image completion, and watermark removal. With the original image and each transformation treated as different modalities, we obtain 6 modalities in total by applying the different transformations to the ground-truth images for this multimodal setting. This case study demonstrates the SMVAE's ability to generate in multiple directions and combinations. Similar to Wu & Goodman (2018), for edge detection, we use the Canny detector Canny (1986) from the Scikit-Image module Van der Walt et al. (2014) to extract edges of the facial image. For facial landmark segmentation, we use the Dlib tool King (2009) and OpenCV Bradski & Kaehler (2000). For colorization, we simply convert RGB colors to grayscale. For watermark removal, we add a watermark overlay to the original image. For image completion, we replace half of the image with black pixels. A sketch of how these transformation modalities can be derived is given below.
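For illustration, the following is a minimal sketch of how such transformation modalities can be derived from a ground-truth image with NumPy and scikit-image. The blending weights, the blacked-out right half, and the omission of the Dlib landmark step are our simplifications, not the authors' exact preprocessing.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import canny

def make_modalities(img, watermark):
    """Derive transformation modalities from one RGB image in [0, 1] (illustrative sketch).

    img:       (H, W, 3) float array, the ground-truth face image.
    watermark: (H, W, 3) float array with a watermark pattern to overlay.
    Facial-landmark segmentation (Dlib + OpenCV) is omitted here for brevity.
    """
    gray = rgb2gray(img)                                    # grayscale view (colorization source)
    edges = canny(gray).astype(np.float32)                  # edge-detection modality (Canny)
    watermarked = np.clip(0.7 * img + 0.3 * watermark, 0.0, 1.0)  # watermark overlay
    completed = img.copy()
    completed[:, img.shape[1] // 2:, :] = 0.0               # image completion: right half blacked out
    return {"image": img, "gray": gray, "edges": edges,
            "watermarked": watermarked, "half": completed}
```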
Fig. 6 shows samples generated from a trained SMVAE model. As can be seen in Fig. 6(a), the SMVAE generates a good reconstruction of the facial landmark segmentation and the extracted edges. In Fig. 6(b), we can see that the SMVAE is able to apply reasonable facial color to the input grayscale image. Fig. 6(c) demonstrates that the SMVAE can recover the image from the watermark and complete the image quite well. The reconstructed right half of the image largely agrees with the left half of the original image. In Fig. 6(d), all traces of the watermark are also removed. Although our reconstructed images suffer from the same blurriness problem shared by VAE methods Zhao et al. (2017), the SMVAE is able to perform cross-modality generation thanks to its ability to capture shared information among modalities. 4.4 CASE STUDY: ROBOTICS CONTROL APPLICATION The second case study shows that our method is readily applicable to robotics control scenarios using the Vision&Touch dataset Liang et al. (2021). We use the SMVAE to learn cross-modality generation from continuous sensory input to images. Emerging human-in-the-loop shared autonomy systems are often equipped with multiple sensors, which places high demands on a model's ability to learn co-representation Lee et al. (2020); Luo et al. (2021); Chen et al. (2021); Selvaggio et al. (2021); Newman et al. (2022); Li et al. (2021). The Vision&Touch dataset is a real-world robot manipulation dataset that contains visual, tactile, control action, and robot proprioception data, and thus possesses more diverse modalities. The robotic arm attempts to insert the peg located at its tip into the target object. We use a total of 4 modalities: depth images, RGB images, the 6-axis force sensor feedback, and the control action given to the robotic arm at each time step. Fig. 7(a) illustrates that, while the robotic arm receives no force signals in the early steps, the reconstructed RGB images clearly show that the arm has no contact with the target box below. Only when the robotic arm receives high force readings does the generated image depict contact between the robotic arm and the target box. The quality of the reconstructed RGB and depth images also differs between partial and full observation. When only limited information is observed (i.e., force and action inputs), our method can only reconstruct RGB and depth images that properly reflect the relative position between the robotic arm and the target object (Fig. 7(a)). But when more information is presented, the latent variables contain more comprehensive information about the event and yield better reconstructions, since we removed the conditional independence assumption (Fig. 7(b)). 5 CONCLUSION This paper proposes a multimodal generative model that incorporates set representation learning into the VAE framework. Unlike previous multimodal VAE methods, the proposed SMVAE provides a scalable solution for multimodal data of variable size and permutation. Critically, our model learns the joint posterior distribution directly without additional factorization assumptions, yielding a more informative objective and the ability to achieve co-representation between modalities. Statistical and visualization results demonstrate that our method compares favorably with other state-of-the-art multimodal VAE methods and has high potential in emerging multimodal tasks that need to learn co-representations of diverse data sources while accounting for missing modalities or set-input processing. The application to cross-modality reconstruction on the robotics dataset further indicates this potential. In the future, we will explore methods that extend the current SMVAE framework to more diverse modalities as well as dynamic multimodal sequences, to provide solutions for real-world multimodal applications.
1. What is the main contribution of the paper regarding multimodal representation learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to learn correlations between input modalities? 3. Do you have any concerns or questions about the mathematical formulation of the proposed method, such as the parametrization of the prior or the training process? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially compared to other works in the field? 5. Are there any typos or confusing aspects in the paper that need to be addressed, such as the distinction between embedding and encoder networks or the introduction of the multi-head attention mechanism?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The work presented in this paper targets multimodal representation learning, following a line of work that uses a variational autoencoder (VAE) as a basis that is extended to learn a latent representation from multiple modalities, that is, observations of the same phenomenon through the lenses of a variety of input sources. The objective is to build a model that scales to a potentially large number of input modalities and that is capable of reconstructing the inputs, even in the case of missing modalities. The training objective is an extension of the traditional VAE ELBO that considers subsets of input modalities at training time. Instead of using either product or mixture of experts (or combinations thereof) to compute the joint posterior distribution, the authors propose to use a multi-head self-attention layer. The claim is that the proposed approach does not require the assumption of conditional independence of the modalities given the latent variable, thus allowing the model to learn correlations between input modalities, which are encoded in the latent representation. In a series of experiments on synthetic multimodal datasets, which extend classical datasets such as MNIST, FashionMNIST, and CelebA with a one-hot encoding of the available labels as an additional modality, the authors compare the proposed method to alternatives from the literature (MVAE, MMVAE and JMVAE), using likelihood and conditional variants. Finally, the authors discuss two real-life use cases, providing visual examples of the benefit of the proposed method. Strengths And Weaknesses The strengths are: - This work addresses an important topic in representation learning. The weaknesses are: - The editorial quality must be drastically improved, both on clarity of exposition and on mathematical rigor. - Several aspects of the mathematical formulation of the proposed method are elusive. The parametrization of the prior with parameters θ is not what you actually do: the prior has no parameters in the remainder of the paper. Eq. 1 has a typo. The distinction between embedding and encoder networks is not clear. Eq. 3 has a typo. - The training process is not clear: do you assume that all input modalities are perfectly aligned, but omit some of them at training time? - The introduction of the function g(⋅) in Eq. 7 is confusing. Are we talking about the encoder network? The "noise outsourced functional representation" is not clear: I assume this is the latent variable. - The way the multi-head attention mechanism is used is obscure: keys, values, and queries are not clearly specified. The deterministic set representation function is indicated again by the function g(⋅), which is confusing. - The section on total correlation is (to the best of my understanding) not correct. A recent article explicitly tackling multimodal representation learning with a total correlation objective is, for example, [1] [1] @inproceedings{hwang2021-neurips, title={Multi-View Representation Learning via Total Correlation Objective}, author={HyeongJoo Hwang and Geon-Hyeong Kim and Seunghoon Hong and Kee-Eung Kim}, booktitle={Advances in Neural Information Processing Systems}, editor={A. Beygelzimer and Y. Dauphin and P. Liang and J. Wortman Vaughan}, year={2021}, url={https://openreview.net/forum?id=SV4NhqUoO8} } - The experimental section is very weak, especially compared to the literature on the topic, including some of the references cited in the paper, such as Shi et al. '19.
The datasets are simplistic compared to the current state of the art, and the metrics used to compare methods are insufficient; see, for example, Shi et al. '19 and subsequent work from the authors, as well as Sutter et al. '20 and subsequent work from the same group. Clarity, Quality, Novelty And Reproducibility This paper requires additional work to improve clarity, mathematical rigor, and reproducibility (the key novelty of the paper is the use of self-attention, which is not clearly explained). There is some merit in terms of novelty, as the key idea of this work is to generalize, even further than what has been done in the literature, the function used to obtain the joint approximate posterior, by using self-attention. The problem is that it is not clear how this can work.
ICLR
Title Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks Abstract Recent work has introduced attacks that extract the architecture information of deep neural networks (DNN), as this knowledge enhances an adversary’s capability to conduct attacks on black-box networks. This paper presents the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels. First, we define the threat model for these attacks: our adversary does not need the ability to query the victim model; instead, she runs a co-located process on the host machine where the victim’s deep learning (DL) system is running and passively monitors the accesses of the target functions in the shared framework. Second, we introduce DeepRecon, an attack that reconstructs the architecture of the victim network using the internal information extracted via Flush+Reload, a cache side-channel technique. Once the attacker observes function invocations that map directly to architecture attributes of the victim network, the attacker can reconstruct the victim’s entire network architecture. In our evaluation, we demonstrate that an attacker can accurately reconstruct two complex networks (VGG19 and ResNet50) having observed only one forward propagation. Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pretrained model in a transfer learning setting. From this meta-model, we evaluate the importance of the observed attributes in the fingerprinting process. Third, we propose and evaluate new framework-level defense techniques that obfuscate our attacker’s observations. Our empirical security analysis represents a step toward understanding DNNs’ vulnerability to cache side-channel attacks. 1 INTRODUCTION Deep neural networks (DNNs) have become an essential tool in various applications, such as face recognition, speech recognition, malware detection, and autonomous driving or aviation (Parkhi et al., 2015; Amodei et al., 2016; Arp et al., 2014; Chen et al., 2015; Smolyanskiy et al., 2017). A DNN’s performance depends widely on the network architecture—the number and types of layers, how the layers are connected, and the activation functions—and, unfortunately, there is no universal architecture that performs well on all tasks. Consequently, researchers and practitioners have devoted substantial efforts to design various DNN architectures to provide high performance for different learning tasks. Owing to their critical role, DNN architectures represent attractive targets for adversaries who aim to mount DNN fingerprinting attacks. In such an attack, the adversary probes a DNN model, considered confidential, until she infers enough attributes of the network to distinguish it among other candidate architectures. In addition to revealing valuable and secret information to the adversary, DNN fingerprinting can enable further attacks on black-box models. While the prior work on adversarial machine learning often assumes a white-box setting, where the adversary knows the DNN model under attack, these attacks are usually unrealistic in practice (Suciu et al., 2018). In consequence, researchers have started focusing on a black-box setting, where model architecture is unknown to the adversary. However, in this setting, the adversary often makes some assumptions about the victim model in order to craft successful adversarial examples (Papernot et al., 2017). 
Instead of approximating, the adversary can start by conducting a DNN fingerprinting attack to infer the information required about the model, then use this information to craft adversarial examples that can evade the model. This can also enable model extraction attacks (Tramèr et al., 2016; Kurakin et al., 2016; Wang & Gong, 2018) and membership inference or model inversion attacks (Shokri et al., 2017; Long et al., 2018). Because of the large number and types of architectural attributes, and the subtle effect that each attribute has on the model's inferences, DNN fingerprinting is challenging when using the typical methods employed in the adversarial machine learning literature. For example, Wang & Gong (2018) propose a hyperparameter stealing attack that requires knowledge of the training dataset, the ML algorithm, and the learned model parameters, yet is unable to extract the model architecture. Wang et al. (2018) demonstrate a fingerprinting attack against transfer learning; however, they rely on the assumption that the teacher model and learning parameters are known to the attacker. To overcome these challenges, recent work has started to investigate attacks that utilize information leaked by architectural side-channels on the hardware where the DNN model runs. Hua et al. (2018) extract the network architecture of a model running on a hardware accelerator by monitoring off-chip memory addresses. Yan et al. (2018) reduce the search space from 10^35 to 16 candidates within a given network architecture by exploiting cache side-channels. In this paper, we ask the question: how vulnerable are DNNs to side-channel attacks, and what information do adversaries need for architecture fingerprinting? We perform, to the best of our knowledge, the first security analysis of DNNs operating in the presence of cache side-channel attacks. Specifically, we define the threat model for these attacks, including the adversary's capabilities and limitations. We then introduce DeepRecon, an efficient attack that reconstructs a black-box DNN architecture by exploiting the Flush+Reload (Yarom & Falkner, 2014) technique, and we further evaluate the importance of specific architectural attributes in the success of fingerprinting. Finally, we propose and evaluate new framework-level defenses against these attacks. Our attack works by targeting lines of code corresponding to the execution of specific network architecture attributes of a deep learning (DL) framework. Specifically, these lines of code correspond to instructions to execute functions that are mapped into the instruction cache when the functions are invoked. Once these lines of code are identified, our attack flushes them from the instruction cache shared by the attacker and the victim. The attacker waits for the victim's process to run and then measures the time it takes to re-access those same lines of code. If the victim's DNN model has accessed any of these particular functions, the corresponding lines of code will be present in the instruction cache when the attacker tries to re-access them. Therefore, the access time to call these functions will be measurably faster than if the victim had not loaded them back into the shared instruction cache. On the other hand, if the victim DNN model did not access these particular functions, the corresponding lines will not be present in the cache when accessed by the attacker, and thus the access time will be measurably slower.
We show that from this seemingly small amount of information that is leaked to the attacker, much of the victim’s DNN architecture can be extracted with no query access required. To launch this attack, we only assume that: 1) an attacker and a victim are co-located in the same machine, and 2) they use the same shared DL framework. In evaluations, we demonstrate that, by learning whether or not specific functions were invoked during inference, we can extract 8 architecture attributes across 13 neural network architectures with high accuracy. Based on the extracted attributes, we demonstrate how an attacker can reconstruct the architectures of two common networks, VGG16 (Simonyan & Zisserman, 2014) and ResNet50 (He et al., 2016) as proof of concept. We also demonstrate a useful example of DeepRecon through model fingerprinting in a transfer learning attack. Finally, we propose countermeasures to obfuscate an attacker from extracting the correct attributes and sequences using observation attacks like DeepRecon and show that these defenses significantly increase the errors in the extracted attributes and can be implemented in various DL frameworks without hardware or operating system support. 2 BACKGROUND As opposed to attacks that exploit vulnerabilities in software or algorithm implementations, sidechannel attacks utilize information leaks from vulnerabilities in the implementation of computer systems. Due to modern micro-processor architecture that shares the last-level cache (L3 cache) between CPU cores, cache side-channel attacks have become more readily available to implement. Since the cache is involved in almost all the memory access activities on a machine, it can be a medium that includes abundant information about programs running on the host. The fundamental idea of the attack is to monitor the access time to the shared contents, e.g., shared libraries or credentials, between a victim and an attacker while the attacker fills the cache set with the addresses known to her (Prime+Probe (Liu et al., 2015)) or keeps flushing the shared data from the cache (Flush+Reload (Yarom & Falkner, 2014)). In both the cases, once the victim accesses memory or shared data, the attacker can identify which memory addresses or shared data is accessed. Prior work has demonstrated that, with cache side-channels, an attacker can construct covert channels between processes, stealing cryptographic keys, or breaking the isolation between virtual machines (Zhang et al., 2014; Liu et al., 2015). FLUSH+RELOAD Our attack leverages the Flush+Reload technique, which monitors accesses to memory addresses in shared contents. The technique assumes that an attacker can run a spy process on the same host machine. This enables the attacker to monitor the shared data or libraries between her and the victim. During monitoring, the attacker repeatedly calls the clflush assembly instruction to evict the L3 cache lines storing shared content and continually measures the time to reload the content. A fast reload time indicates the data was loaded into the cache by the victim whereas a slow reload time means the data is not used. From this information, the attacker determines what data is currently in use and identifies the control flow (order of function calls) of the victim’s process. We chose Flush+Reload over Prime+Probe since the results from Flush+Reload produce less noise. ATTACKS ON BLACK-BOX DEEP NEURAL NETWORKS Prior work has proposed various methods to attack black-box DNNs. Tramèr et al. (2016) and Papernot et al. 
(2017) demonstrated model extraction attacks on black-box DNNs that aim to learn a substitute model by using the data available to the attacker and observing the query results. Fredrikson et al. (2015) and Shokri et al. (2017) demonstrated model inversion attacks on black-box DNNs that reveal a user’s private information in the training data leveraging model predictions. Wang & Gong (2018) proposed a hyper-parameter stealing attack that aims to estimate the hyper-parameter values used to train a victim model. However, these attacks require unrealistic conditions, e.g., the architecture of the victim network needs to be known to attackers, or the victim uses a network with simple structures, such as multi-layer perceptrons. Thus, the capability of DeepRecon attack that reconstructs black-box DNN architectures can bridge the gap between the realistic black-box scenario and their conditions. RECONSTRUCTING BLACK-BOX DNNS VIA SIDE-CHANNELS Recent studies have discovered various methods to extract the architecture of a black-box DNN. Memory and Timing Side-Channels: Hua et al. (2018) monitored off-chip memory accesses to extract the network architecture of a victim model running on a hardware accelerator. They estimated the possible architecture configurations and extracted model parameters. However, the attack requires physical accesses to the hardware, whereas our attack does not. Power Side-Channel: Wei et al. (2018) demonstrated that an attacker can recover an input image from collected power traces without knowing the detailed parameters in the victim network. However, this approach also assumed an attacker who knows the architecture of a victim network, so our attack could help meet the assumptions of this attack as well. Cache Side-Channel: Concurrent work by Yan et al. (2018) demonstrates that an attacker can reveal the architecture details by reverse engineering and attacking generalized matrix multiply (GeMM) libraries. However, GeMM-based reverse engineering can only reveal the number of parameters of convolutional or fully connected layers because others such as activation and pooling layers are difficult to characterize by matrix multiplications. Also, in order for the monitored functions in GeMM libraries to be in a shared instruction cache of an attacker and a victim, the multiplications must occur on the CPU. However, DeepRecon can be performed independent of the hardware on which the computations occur, generalizing better common hardware on which DNNs run (e.g., GPUs). Using Known Student Models: Wang et al. (2018) proposed a transfer learning technique in which an attacker identifies teacher models by using known student models available from the Internet. This approach assumed that the victim selects the teacher from a set of known architectures. We, however, take this a step further and fingerprint families of architectures as well as many commonly known teacher models. Additionally, we are able to reconstruct arbitrary teacher model architectures with high accuracy. Meta-Models: Oh et al. (2018) demonstrated that an attacker can estimate the victim’s architecture by using a brute-force approach and meta-models. They first trained all the possible architectures of a given set and pruned the models with inferior performance. Then, they trained a meta-model that identified the network architecture using mutated samples and labels. 
However, the pruning process is time intensive (i.e., 40 GPU days for 10k candidates of LeNet (LeCun, 1998)), and the candidates were selected from limited architectural choices, whereas we again go a step further in identifying families of architectures and can generalize to previously unknown teacher models. 3 DEEPRECON ATTACK 3.1 THREAT MODEL Our threat model requires an attacker who can launch a co-located user-level process on the same host machine as the victim. This ensures the attacker and the victim’s process share the same instruction cache. This co-location also allows our attacker to observe the victim DNN’s behavior without actively querying the model, avoiding the common assumption of query access in the literature on black-box attacks. Consider the example of any computer that an attacker has access to at a user-level, the attacker can log into this machine and attack other users with DeepRecon. Another way for an attacker to achieve co-location is to disguise her process as a benign program such as an extension for a browser. Once some victims install the extension in their browser, the attacker can easily launch a monitoring process. We also assume that the attacker and victim use the same opensource DL frameworks shared across users. Importantly, this assumption can be easily met because many popular DL frameworks such as Tensorflow (Abadi et al., 2016) or PyTorch1 are provided as open-source libraries, and this practice of sharing libraries across users is default on major operating systems, e.g., Windows, MacOS, and Ubuntu. Thus, our attacker can identify the addresses of functions to monitor in the instruction cache by reverse-engineering the shared framework’s code. Motivating Attack Example: We provide a practical example where our threat model is applicable. Suppose an attacker aims to install malware on a victim’s machine where an anti-virus system, based on a DNN model, is running. To evade malware detection in common black-box attacks such as the attack proposed in Ilyas et al. (2018), an attacker needs to drop crafted programs actively to monitor the model’s decisions and synthesize an evasive sample based on the collected data. However, when the attacker drops multiple files, her behavior can be detected by the victim. This is further amplified by the need to query the model repeatedly to craft any more malicious files. On the other hand, our attacker induces the victim to install a chrome add-on (which runs at a userlevel) that passively monitors cache behaviors of the model and extracts the architecture. Then, the attacker trains a surrogate model with public datasets (including malware and benign software). With the surrogate model, the attacker crafts her malware that evades detection and can continue to craft malicious files that will be classified as benign offline and without any further observations. As opposed to common black box attacks, our attacker lowers the possibility of being caught because she only monitors the victim model while it is in use and does not need to query the model. 3.2 ATTACK OVERVIEW The overview of DeepRecon attack is described in Fig. 1. The victim’s behaviors are depicted with the dotted lines (black), and the attacker’s actions are described with the solid lines (red). While preparing the attack offline, the attacker first analyzes the deep learning framework that the victim uses and collects the target functions corresponding to the architecture attributes that the attacker wants (Table 1). 
Then later, the attacker launches a co-located process at the user-level that runs along with the victim’s process on the same host machine. When the victim’s process runs training 1https://pytorch.org or predictions with its model, the target functions are invoked and the instructions that call them are loaded into the shared instruction cache. The attacker periodically flushes the cache lines and measures the access time to the target instructions. If the victim invokes any of the target functions after flushing, the following access time measured by the attacker will be measurably faster than if the victim does not invoke them. The attacker collects the number and sequence of invocations and then extracts the victim model’s architecture attributes. Then, the attacker reconstructs the victim model’s architecture. 3.3 REVERSE ENGINEERING In Table 1, we analyze the TensorFlow v1.9.0-rc0 framework2 and list the target functions corresponding to the architecture attributes. We choose TensorFlow due to its popularity as an open source machine learning (ML) framework, but believe that the methods we describe will be applicable to most, if not all, other popular frameworks. In addition to having found some corresponding functions in another popular framework, PyTorch/Caffe2, our attack leverages the inherent structure of a scalable and widely deployable ML framework, namely library layer abstraction. All the of the functions we monitor in TensorFlow are in the core of the library, both below the API interface and above the system dependent code. Because of this, our attack not only does not depend on the specific TensorFlow API a victim uses but also is agnostic to the type of processing hardware the victim is using, from a single CPU to a cluster of GPUs. The specific functions we monitor in Table 1 represent two subgroups: those corresponding to control flow and those corresponding to architecture attributes. The control flow functions allow us to observe the number of queries the victim makes to the model and the number of layers that are updated by gradient decent if we observe the model when it is being trained The function that monitors the number of queries is especially important, as it allows us to separate individual observations. The architecture attribute functions are called once per instance of an architecture attribute being present in the neural network, allowing us to see the number of each attribute and the sequence in which they occur in the victim’s architecture. Combined, these functions allow us to observe the architecture attributes of a neural network from start to finish on a given observation. Additionally, the bias operator gradient function, given in the table by #grads, can allow an attacker to figure out the total number of layers that are updated during training time if the attacker observes 2https://github.com/tensorflow/tensorflow/releases/tag/v1.9.0-rc0 Type Code Stage Func. Name Location in TensorFlow Code Control Flow #queries T/I RunCallable() core/common runtime/session ref.cc [line: 154] #grads T compute() core/kernels/bias op.cc [line: 218] Arch. 
Attributes #convs T/I operator() core/kernels/conv ops.cc [line: 122] #fcs T/I compute() core/kernel/matmul op.cc [line 451] #softms T/I compute() core/kernels/cwise ops common.h [line: 240] #relus T/I compute() core/framework/numeric op.h [line: 58] #mpools T/I compute() core/kernels/pooling ops common.h [line: 109] #apools T/I compute() core/kernels/avgpooling op.cc [line: 76] #merges T/I compute() core/kernels/cwise ops common.h [line: 91] #biases T/I compute() core/kernels/bias op.cc [line: 98] (Note that T stands for the training, and I indicates the inference.) Table 1: Target Functions. The monitored functions in the TensorFlow framework (v1.9.0-rc0). Each function corresponds to a control flow or an attribute. [Note that the codes are the number of queries (#queries), gradient updates (#grads), convolutional layers (#convs), fully connected layers (#fcs), softmaxs (#softms), ReLUs (#relus), max poolings (#mpools), avg. poolings (#apools), merge operations (#merges), bias operations (#biases).] the training of the model. Using this information, the attacker, already knowing the total number of layers in the architecture, can find the point at which the victim is freezing the backpropagation. This allows the attacker to know which layers are directly inherited from the training model and which layers are specifically trained by the victim. The relevance of this will be discussed in our application of DeepRecon to model fingerprinting (Sec. 4). Limitations. Similar to concurrent work (Yan et al., 2018), we are also able to extract additional information, such as the number of parameters in convolutional and fully connected layers by monitoring the matrix multiplications in the Eigen library3 on which TensorFlow is built. This attack provides more fine-grained information, but it does not generalize to computations on hardware other than a CPU. Also, we examine whether our attack can recover the inputs to the model and its parameters. By varying inputs and parameters while monitoring the functions used to compute these parameters using a code coverage tool, GCOV4, we find that the framework implements matrix multiplications of parameters in a data-independent way. Thus, we are unable to estimate the inputs and parameters of a victim model. We hypothesize that this is a general limit of cache based side-channel attacks on DNNs that target instructions, and that obtaining the parameters is reducible to the problem of reading arbitrary victim memory. 3.4 EXTRACTING ARCHITECTURE ATTRIBUTES We run our attack on Ubuntu 16.04 running on a host machine equipped with the i7-4600M processor (8 cores and 4MB L3 cache). Our victim and attacker processes are running at the user-level on the same operating system (OS). Both the processes utilize the TensorFlow v1.9.0-rc0 framework. The victim uses the DNN model to make predictions, and the attacker launches the Flush+Reload attack using the Mastik toolkit (Yarom, 2016) to monitor the target functions at the same time. A total of 13 convolutional neural network (CNN) architectures are considered in our experiment: DenseNet121, 169, 201 (Huang et al., 2017), VGG16, 19 (Simonyan & Zisserman, 2014), ResNet50, 101, 152 (He et al., 2016), InceptionV3, InceptionResNet (Szegedy et al., 2015), Xception (Chollet, 2017), MobileNetV1, and MobileNetV25 (Howard et al., 2017). Table 2 includes the extraction results from monitoring VGG16 and Resnet50. The full extraction results from the 13 networks are in Appendix D. 
(Footnotes: 3 http://eigen.tuxfamily.org/index.php; 4 https://gcc.gnu.org/onlinedocs/gcc/Gcov.html; 5 Note that we use alpha = 1.0 for both.) We first show the results from a Short attack, where an attacker can only run her process for a short interval of time, observing only a single query of the network. We randomly choose ten individual queries and average the attributes. We report errors as the sum of absolute deviations from the ground truths. In VGG19, our attacker has 2.6 errors on average, and 3.1 in ResNet50. We also show the extraction results from 10 continuous observations (L), in which the attacker runs her process for a more extended period of time. The error rates in both networks are similar. These results demonstrate that DeepRecon achieves better accuracy by only observing a running network than prior work that assumes query access (Oh et al., 2018). 3.5 RECONSTRUCTING THE ARCHITECTURE OF BLACK-BOX DEEP NEURAL NETWORKS Based on the extracted architecture information, DeepRecon reconstructs the entire DNN architecture of the victim model. In these examples, we focus on the fact that most CNN architectures consist of feature extractor and classifier layers. The feature extractor is located in the earlier layers and is a combination of basic building blocks. The classifier is a set of fully connected layers at the end of the network. In VGGs and ResNets, there are standard blocks used in each CNN architecture, as we depict in Fig. 2. Each block includes activation layers with preceding convolutional layers. In the classifier layers, each fully connected layer is followed by an activation layer. We describe the reconstruction process of ResNet50 in Table 3. (Note that we also reconstructed VGG16 without errors and show the result in Appendix A.) In this table, we compare the computation sequences observed by the attacker with the actual computations in ResNet50. We can see the sequences are accurately captured with a few errors. The three steps at the bottom describe the reconstruction process of our attacker. Our attacker first identifies (1) the number of blocks by counting the (max-)pooling layers. Once the attacker separates blocks with the pooling layer locations, she counts (2) the number of convolutional layers in each block. In ResNets, we know that the Residual block has four convolutional layers and the Identity block has three convolutional layers each; thus, the attacker can identify the type of each block. After that, the attacker estimates (3) the number of fully connected layers at the end. Finally, with this block-level information, our attacker successfully estimates that the victim architecture is ResNet50 with high accuracy. (A toy sketch of these block-level rules is given at the end of this subsection.) Table 3: Reconstruction Process of ResNet50 Architecture. We list the computation sequences captured by our attack and the reconstruction process below them; the errors our attacker makes in capturing the correct computation sequences are marked in bold red in the original. (C, P, F, M indicate the Convolutional, Pooling, Fully connected, and Merge layers, and the subscripts denote the activations: R for ReLU and So for Softmax.) Ground truth (G) for ResNet50: CR PM CR CR C C MR CR CR C MR CR CR C MR CR CR C C MR CR CR C MR CR CR C MR CR CR C MR CR CR C C MR CR CR C MR CR CR C MR CR CR C MR CR CR C MR CR CR C MR CR CR C C MR CR CR C MR CR CR C MR PA FSo. Observed sequence (S): CR PM C CR CR C MR CR CR C MR CR CR C MR C CR CR C MR CR CR ... MR CR CR C MR CR CR C MR C CR CR C MR CR CR C MR CR CR C MR CR CR C MR CR CR C MR R CR C MR C CR CR C MR CR CR C MR CR CR C MR PA FSo. Reconstruction step (1): Blocks 1 through 18. Reconstruction step (2): Input, Residual Block, Identity Block, Identity Block, Residual Block, Identity Block, Identity Block, Identity Block, Residual Block, Identity Block, Identity Block, Identity Block, Identity Block, Identity Block, Residual Block, Identity Block, Identity Block, Fully Connecteds. Reconstruction step (3): ResNet50, the configuration with 50 layers. Discussion about the Reconstruction Errors. We also examine whether the errors in our experiments have specific patterns, which would allow our attacker to filter them out. However, we could not find any pattern: the types and locations of the erroneous attributes are different in each run (over 10 runs). Thus, we attribute these errors to two primary causes. First, there can be background noise from other processes that our Flush+Reload attack picks up, e.g., a process can pull data into the L3 cache and evict the target function between when the victim calls the function and we reload it. In this case, our attacker cannot observe the victim calling the function. Second, our attack can experience common errors associated with the Flush+Reload attack (Yarom & Falkner, 2014), e.g., a victim invokes the target function when we reload, causing our attacker to see a cache miss instead of correctly observing a cache hit.
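To make the block-level reconstruction rules concrete, here is a hedged Python sketch that applies steps (1)-(3) to an observed computation sequence in the notation of Table 3. The segmentation heuristic and the shortened example sequence are ours, not the authors' exact procedure.

```python
def reconstruct_blocks(tokens):
    """Toy version of reconstruction steps (1)-(3): segment an observed layer sequence
    into blocks and label them. Tokens: C/CR = conv (+ReLU), PM/PA = max/avg pooling,
    MR = merge+ReLU, FSo = fully connected + softmax."""
    blocks, current = [], []
    for tok in tokens:
        current.append(tok)
        if tok in ("PM", "PA", "MR"):        # pooling / merge layers close a block
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)

    labels = []
    for block in blocks:
        n_convs = sum(tok in ("C", "CR") for tok in block)
        if any(tok.startswith("F") for tok in block):
            labels.append("FullyConnected")
        elif n_convs >= 4:
            labels.append("ResidualBlock")   # 4 convs: block with a projection shortcut
        elif n_convs == 3:
            labels.append("IdentityBlock")   # 3 convs: identity shortcut
        else:
            labels.append("Input/Pooling")
    return labels

observed = "CR PM CR CR C C MR CR CR C MR CR CR C MR PA FSo".split()
print(reconstruct_blocks(observed))
# ['Input/Pooling', 'ResidualBlock', 'IdentityBlock', 'IdentityBlock', 'Input/Pooling', 'FullyConnected']
```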
4 FINGERPRINTING BLACK-BOX NEURAL NETWORKS Our attacker identifies victim neural network architectures using statistical models trained on the attributes as features and the architectures as labels. This is a powerful capability to have if our attacker aims to fingerprint the architecture of the pre-trained models used in transfer learning. Transfer learning is typically done in a fine-tuning manner: a user creates a student model that uses the architecture and parameters of a teacher model and trains only a few fully connected layers at the end of the network by freezing the backpropagation of the preceding layers. We also found that our attacker can learn which layers in the student model are not updated during training when she observes the training process (Sec. 3.3). Thus, our attacker can extract both the network architecture and the frozen layers of the student (victim) model. Once our attacker identifies the victim network's teacher model with this information, the attacker can utilize several pre-trained models available from the Internet to perform further attacks. For instance, an attacker can increase the success rate of black-box attacks by crafting adversarial samples with the internal representation from the teacher models (Wang et al., 2018). Additionally, since adversarial samples transfer across different models for the same task (Tramèr et al., 2017), the attacker is not required to use the exact same teacher model—i.e., she can use any pre-trained model of an architecture family that achieves similar accuracy on a task. An attacker can also perform model extraction easily. This is because the model parameters of the pre-trained models can be readily found on the Internet as well and are used in the victim model in this setting. Finally, if the attacker has partial knowledge of the victim's training data and can learn which layers were frozen during training (see Sec. 3.3), she can fully estimate the parameters of the entire network by independently training the last few layers that were not frozen. To evaluate our fingerprinting attack, we train decision tree classifiers on the 13 networks used in Sec. 3.4 to identify the network architectures using the extracted attributes and labels. We extract the attributes over 50 observations of each network (650 in total) and utilize 5-fold cross-validation. We measure the classification accuracy and analyze the four most essential attributes based on mutual information (MI) scores. Since the attributes are not affected by the host machine or operating system, the attacker can train the models offline for use in attacks. Table 4 shows the results of fingerprinting the neural networks. We conduct three types of classification tasks with the aim of identifying 1) the entire set of 13 networks, 2) the 5 network families, and 3) the architecture variants within each family (in this task, we consider MobileNet and MobileNetV2 as the same family). We report the accuracy of the best decision trees and the average accuracy over the cross-validations. In all the tasks, our decision trees achieve 100% accuracy, which demonstrates that, once trained, these statistical models can be perfect predictors. (Note that we also visualize our data in an attribute space via PCA analysis in Appendix C.) We also identified the four essential attributes across all the classifications: 1) #relus, 2) #merges, 3) #convs, and 4) #apools. Identifying these influential attributes can guide a potential obfuscation-based defensive strategy against such side-channel attacks; an illustrative sketch of such an attribute-based classifier is given below. Table 4: Fingerprinting Performance and Important Attributes. Each row corresponds to a task. We list the accuracy of the best classifiers and the essential attributes based on the MI scores, denoted by the numbers in brackets. (V, R, D, I, and M indicate VGGs, ResNets, DenseNets, InceptionNets, and MobileNets.) Total: 1.0 [0.9046]; #relus [0.2575], #merges [0.2534], #convs [0.2497], #biases [0.1034]. Family: 1.0 [0.9938]; #relus [0.4621], #convs [0.4421], #mpools [0.3382], #apools [0.2752]. Arch. Variants V: 1.0 [0.9867]; #relus [0.6982], #convs [0.6982], #biases [0.6898]. Arch. Variants R: 1.0 [0.9900]; #relus [0.6399], #merges [0.6399], #convs [0.6399], #biases [0.3750]. Arch. Variants D: 1.0 [0.9867]; #relus [0.6399], #merges [0.6399], #convs [0.6100]. Arch. Variants I: 1.0 [1.0000]; #convs [0.6399], #merges [0.6399], #apools [0.5875], #biases [0.3373]. Arch. Variants M: 1.0 [1.0000]; #relus [0.6982], #convs [0.6982], #fcs [0.6595], #softms [0.6228]. 5 DEFENSES TO DEEPRECON ATTACK Previous studies on defenses against cache side-channel attacks (Kong et al., 2013; Zhou et al., 2016) require specific hardware (page-locked cache) or kernel-level features (page coloring). These solutions have not been widely deployed or have remained optional features because of their impact on computational performance. Hence, we propose framework-level defenses that do not require specialized hardware or kernel-level updates. Our findings in Sec. 4 regarding the architecture attributes essential for an attack, e.g., #relus, #convs, and #merges, guide our search for defenses. As a result, we propose obfuscating the attacker's observations of these attributes. We show that these defenses significantly reduce the success of our DeepRecon attack and that they can be extended to protect against a more general class of cache side-channel attacks on DL frameworks.
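As an illustration of the fingerprinting meta-model of Sec. 4, the following is a hedged scikit-learn sketch; the toy attribute values and the noise model are invented for demonstration and do not reproduce the paper's measurements.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import mutual_info_classif

FEATURES = ["#convs", "#fcs", "#softms", "#relus", "#mpools", "#apools", "#merges", "#biases"]

# Toy stand-in for the extracted attributes: 50 noisy observations per architecture.
rng = np.random.default_rng(0)
vgg_proto = np.array([16, 3, 1, 18, 5, 0, 0, 19])      # illustrative, not measured values
resnet_proto = np.array([53, 1, 1, 49, 1, 1, 16, 54])  # illustrative, not measured values
X = np.vstack([vgg_proto + rng.integers(0, 3, (50, 8)),
               resnet_proto + rng.integers(0, 3, (50, 8))])
y = np.array(["VGG19"] * 50 + ["ResNet50"] * 50)

clf = DecisionTreeClassifier(random_state=0)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Rank attributes by mutual information with the architecture label.
mi = mutual_info_classif(X, y, discrete_features=True, random_state=0)
for name, score in sorted(zip(FEATURES, mi), key=lambda t: -t[1]):
    print(name, round(score, 3))
```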
5.1 RUNNING DECOY PROCESSES WITH TINY MODELS DeepRecon and other potential cache side-channel attacks on deep learning frameworks can only observe that a library function is called, but not by whom. By running an extra process (i.e., a decoy process) simultaneously with the actual process, we develop a simple but effective defensive strategy. The decoy process also invokes the target functions in the shared framework, which obfuscates the architecture attributes and computation sequences. To minimize the computational overhead, we utilize networks that only have a few layers, referred to as TinyNets. To evaluate our defense, we train these TinyNets at the same time as the victim's process is running ResNet50, and we measure the number of errors in the extracted attributes. The results are listed in Table 5. We experiment with three TinyNets: 1) one with only a Conv. layer (C:1), 2) a Conv. with a ReLU layer (C:1, R:1), and 3) two Conv. and ReLU layers with a Merge layer (C:2, R:2, M:1). We found that the existence of a decoy process significantly hinders the attacker's ability to extract correct attributes, causing 1283-2211 errors in the attribute extractions. Contrasted with up to 2.9 errors on average from DeepRecon's previous extractions (Section 3.4), we find that this defense is exceedingly effective at curbing this type of reconstruction attack. We also show that we can increase the errors associated with the attributes that we aim to obfuscate. For instance, when we run the TinyNet with only one convolutional layer, we observe that #convs is significantly increased. This is important because, with our defenses, a defender can choose the attributes to obfuscate. Since the defender can control what noise gets introduced, they can also dynamically and adaptively change what noise is added into the attacker's observations, thereby increasing our defense's effectiveness and generalizability. To quantify the overhead of this defense, we measure the average network inference time with and without a decoy process, and we observe that the defense increases the inference time by only 5.91-8.38 seconds per inference. Thus, this defense is a reasonable measure to combat cache side-channel attacks that reconstruct DNNs. Table 5: Effectiveness of the Decoy Process. We compare the 8 attributes extracted from 10 runs, the average errors, and the average time with and without TinyNets. (C, R, and M refer to the numbers of convolutional, ReLU activation, and merge layers in the TinyNet; attribute columns are #convs, #fcs, #softms, #relus, #mpools, #apools, #merges, #biases, followed by errors and time.) ResNet50 (no decoy): 54.5, 1, 1, 48.9, 1.1, 1, 16, 49.8; errors 2.9; time 17.88. ResNet50 + TinyNet C:1: 368.75, 1.05, 1.05, 47.05, 0.95, 1.00, 687.75, 347.80; errors 1282.40; time 23.79. ResNet50 + TinyNet C:1 R:1: 360.00, 1.15, 1.15, 394.00, 1.00, 1.00, 675.95, 350.80; errors 1612.05; time 23.85. ResNet50 + TinyNet C:2 R:2 M:1: 414.55, 1.00, 1.00, 715.10, 1.10, 1.00, 782.25, 468.25; errors 2211.25; time 26.26. 5.2 OBLIVIOUS MODEL COMPUTATIONS Another defense against DeepRecon is to obfuscate the order and number of the computations (i.e., function invocations) observed by our attacker using oblivious model computations. We propose two approaches. First, we can update the victim's architecture by adding extra layers. To minimize side-effects such as performance loss, these layers return an output with the same dimensions as the input.
For instance, a convolutional layer with kernel size 3, padding size 1, and stride 1 preserves the input dimensions, and the identity block of the ResNets preserves them as well. Thus, we augment the original architecture by adding such layers at random locations to make the same architecture look different from the attacker's point of view. Prior work (Targ et al., 2016) has shown the unraveled view of the ResNet architecture. Under this view, the skip-connections of a network can be expressed as an ensemble of multiple computational paths that can be computed independently. Hence, we try splitting a computational path with skip-connections into multiple paths without skip-connections. In forward propagation, the multiple paths are randomly chosen and computed so that our attacker finds it difficult to capture the exact architecture. To evaluate our intuition, we construct the obfuscated architecture of ResNet50 (see Appendix B) and extract the attributes using our attack. Our results are shown in Table 6. Using this defense, the errors detected by DeepRecon increase from 2-3 to 28 for ResNet50. During this test, the first 3 of the 16 blocks of ResNet50 are obfuscated. While less effective than our previous defense, we conclude that this defense can still marginally obfuscate the observations of our attacker. Additionally, the added computational time is also small: it only increases from 17.88 to 24.03 seconds. 6 CONCLUSION This paper conducts the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels. We first define a realistic threat model for these attacks: our attacker does not require the ability to query the victim model; she runs a co-located process on the machine where the victim's DL system is running and passively monitors the accesses of target functions in a shared framework. We also present DeepRecon, an attack that reconstructs the architecture of a victim network using the architecture attributes extracted via the Flush+Reload technique. Based on the extracted attributes, we further demonstrate that an attacker can build a meta-model that precisely fingerprints the architecture and family of a pre-trained model in a transfer learning setting. With the meta-model, we identified the attributes essential for these attacks. Finally, we propose and evaluate new framework-level defense techniques that obfuscate our attacker's observations. Our empirical security analysis represents a step toward understanding how DNNs are vulnerable to side-channel attacks. D ATTRIBUTES EXTRACTION RESULTS FROM OTHER NETWORKS We show the attribute extraction results for the other 11 networks in Table 9.
1. What is the main contribution of the paper regarding fingerprinting neural network architectures? 2. What are the weaknesses of the paper's approach, particularly in the considered threat model? 3. Do you have any questions regarding the paper's methodology or conclusions? 4. How does the reviewer assess the significance and novelty of the paper's content? 5. Are there any minor comments or suggestions for improvement regarding the paper's presentation or discussion?
Review
Review This paper considers the problem of fingerprinting neural network architectures using cache side channels. In the considered threat model, the attacker runs a process co-located with the victim's, and uses standard FLUSH+RELOAD attacks to infer high-level architectural information such as the number and types of layers of the victim's ML model. The paper concludes with the discussion of some "security-through-obscurity" defenses. I don't quite understand the threat model considered in this paper. The main motivating factor given by the authors for uncovering model architecture details is for facilitating black-box attacks against ML models (e.g., for adversarial examples or membership inference). Yet, in the case of adversarial examples for instance, knowledge of the architecture is often considered a given as keeping it secret has very little influence on attacks. There are black-box attacks that require no knowledge of the architecture and only a few queries (e.g., Black-box Adversarial Attacks with Limited Queries and Information, Ilyas et al., ICML'18). So overall, learning such coarse-grained features about a model just doesn't seem particularly useful, especially since architecture-level details are often not considered private or secret to begin with. After architectural details have been extracted, the end-goal attacks on ML models considered by the authors (e.g., model stealing, adversarial examples, etc.) require query access anyways. Thus, additionally assuming co-location between the adversary and the victim's model seems to unnecessarily strengthen the attacker model. Maybe the most interesting scenario to consider for cache side-channels in ML is when ML models are run on trusted hardware (e.g., Oblivious Multi-Party Machine Learning on Trusted Processors, Ohrimenko et al.; or this work also submitted to ICLR: https://openreview.net/forum?id=rJVorjCcKQ). Cache side channels are much more relevant to that threat model (i.e., ML code running in a trusted hardware enclave hosted by a malicious party). And indeed, there have been many cache side-channel attack papers against trusted hardware such as Intel's SGX (e.g., Software Grand Exposure: SGX Cache Attacks Are Practical, Brasser et al.) But given what we know about the strength of these cache side channel attacks, one would expect to be able to extract much more interesting information about a target model, such as its weights, inputs or outputs. In the above trusted hardware scenario, solely extracting architecture-level information would also not be considered a very strong attack, especially since coarse-grained information (e.g., a rough bound on the number of layers), can be trivially obtained via timing side channels. Minor comments: - In the introduction, you say that white-box attacks for adversarial examples are rendered ineffective by gradient masking. This isn't true in general. Only "weak" white-box attacks can be rendered ineffective this way. So far, there are no examples of models that resist white-box attacks yet are vulnerable to black-box attacks. - What exactly causes the cache-level differences you observe? Can you give some code examples in the paper that showcase what happens? Are the TensorFlow code lines listed in Table 1 from a specific commit or release? - The defenses discussed in Section 5 are all forms of "security through obscurity" that seem easily defeated by a determined attacker that adapts its attack (and maybe uses a few additional observations). 
--REVISION-- I thank the authors for their rebuttal and clarifications on the threat model and end goals of their attacks. I remain somewhat unconvinced by the usefulness of extracting architectural information. For most of the listed attacks (e.g., building substitute models for adversarial examples, or simply for model extraction) it is not clear from prior work that knowledge of the architecture is really necessary, although it is of course always helpful to have this knowledge. As I mentioned in my review, with current (undefended) ML libraries, it should be possible to extract much more information (e.g., layer weights) using cache side channels.
ICLR
Title Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks Abstract Recent work has introduced attacks that extract the architecture information of deep neural networks (DNN), as this knowledge enhances an adversary’s capability to conduct attacks on black-box networks. This paper presents the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels. First, we define the threat model for these attacks: our adversary does not need the ability to query the victim model; instead, she runs a co-located process on the host machine where the victim’s deep learning (DL) system is running and passively monitors the accesses of the target functions in the shared framework. Second, we introduce DeepRecon, an attack that reconstructs the architecture of the victim network using the internal information extracted via Flush+Reload, a cache side-channel technique. Once the attacker observes function invocations that map directly to architecture attributes of the victim network, the attacker can reconstruct the victim’s entire network architecture. In our evaluation, we demonstrate that an attacker can accurately reconstruct two complex networks (VGG19 and ResNet50) having observed only one forward propagation. Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pretrained model in a transfer learning setting. From this meta-model, we evaluate the importance of the observed attributes in the fingerprinting process. Third, we propose and evaluate new framework-level defense techniques that obfuscate our attacker’s observations. Our empirical security analysis represents a step toward understanding DNNs’ vulnerability to cache side-channel attacks. 1 INTRODUCTION Deep neural networks (DNNs) have become an essential tool in various applications, such as face recognition, speech recognition, malware detection, and autonomous driving or aviation (Parkhi et al., 2015; Amodei et al., 2016; Arp et al., 2014; Chen et al., 2015; Smolyanskiy et al., 2017). A DNN’s performance depends widely on the network architecture—the number and types of layers, how the layers are connected, and the activation functions—and, unfortunately, there is no universal architecture that performs well on all tasks. Consequently, researchers and practitioners have devoted substantial efforts to design various DNN architectures to provide high performance for different learning tasks. Owing to their critical role, DNN architectures represent attractive targets for adversaries who aim to mount DNN fingerprinting attacks. In such an attack, the adversary probes a DNN model, considered confidential, until she infers enough attributes of the network to distinguish it among other candidate architectures. In addition to revealing valuable and secret information to the adversary, DNN fingerprinting can enable further attacks on black-box models. While the prior work on adversarial machine learning often assumes a white-box setting, where the adversary knows the DNN model under attack, these attacks are usually unrealistic in practice (Suciu et al., 2018). In consequence, researchers have started focusing on a black-box setting, where model architecture is unknown to the adversary. However, in this setting, the adversary often makes some assumptions about the victim model in order to craft successful adversarial examples (Papernot et al., 2017). 
Instead of approximating, the adversary can start by conducting a DNN fingerprinting attack to infer the information required about the model, then use this information to craft adversarial examples that can evade the model. This can also enable model extraction attacks (Tramèr et al., 2016; Kurakin et al., 2016; Wang & Gong, 2018) and membership inference or model inversion attacks (Shokri et al., 2017; Long et al., 2018). Because of the large number and types of architectural attributes, and the subtle effect that each attribute has on the model’s inferences, DNN fingerprinting is challenging when using the typical methods employed in the adversarial machine learning literature. For example, Wang & Gong (2018) propose a hyperparameter stealing attack that requires knowledge of the training dataset, the ML algorithm, and the learned model parameters, yet is unable to extract the model architecture. Wang et al. (2018) demonstrate a fingerprinting attack against transfer learning; however, they rely on the assumption that the teacher model and learning parameters are known to the attacker. To overcome these challenges, recent work has started to investigate attacks that utilize information leaked by architectural side-channels on the hardware where the DNN model runs. Hua et al. (2018) extract the network architecture of a model running on a hardware accelerator by monitoring off-chip memory addresses. Yan et al. (2018) reduce the search space from 10^35 to 16 candidates within a given network architecture by exploiting cache side-channels. In this paper, we ask the question: how vulnerable are DNNs to side-channel attacks, and what information do adversaries need for architecture fingerprinting? We perform, to the best of our knowledge, the first security analysis of DNNs operating in the presence of cache side-channel attacks. Specifically, we define the threat model for these attacks, including the adversary’s capabilities and limitations. We then introduce DeepRecon, an efficient attack that reconstructs a black-box DNN architecture by exploiting the Flush+Reload (Yarom & Falkner, 2014) technique, and we further evaluate the importance of specific architectural attributes in the success of fingerprinting. Finally, we propose and evaluate new framework-level defenses against these attacks. Our attack works by targeting lines of code corresponding to the execution of specific network architecture attributes of a deep learning (DL) framework. Specifically, these lines of code correspond to instructions to execute functions that are mapped into the instruction cache when the functions are invoked. Once these lines of code are identified, our attack flushes them from the instruction cache shared by the attacker and the victim. The attacker waits for the victim’s process to run and then measures the time it takes to re-access those same lines of code. If the victim’s DNN model has accessed any of these particular functions, the corresponding lines of code will be present in the instruction cache when the attacker tries to re-access them. Therefore, the access time to call these functions will be measurably faster than if the victim had not loaded them back into the shared instruction cache. On the other hand, if the victim DNN model did not access these particular functions, the corresponding lines will not be present in the cache when accessed by the attacker, and thus the access time will be measurably slower.
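To make the timing mechanism concrete, the sketch below shows only the post-processing of a Flush+Reload trace; the probing loop itself (flushing the cache line and timing the reload) has to run in native code, for example via the Mastik toolkit the paper uses later. The threshold value, the trace format, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: interpreting Flush+Reload measurements.
# Assumption: a native probe (e.g., built on the Mastik toolkit) has already
# produced, for every probe round, the reload latency of each monitored
# instruction-cache line. A reload faster than the threshold means the victim
# executed that function between our flush and our reload.
HIT_THRESHOLD_CYCLES = 100  # assumed LLC-hit cutoff; must be calibrated per machine

def infer_invocations(trace):
    """trace: iterable of (round, function_name, reload_cycles) tuples.
    Returns the inferred sequence of victim function invocations."""
    calls = []
    for rnd, func, cycles in trace:
        if cycles < HIT_THRESHOLD_CYCLES:  # cache hit -> victim touched the line
            calls.append((rnd, func))
    return calls

# Toy trace with two monitored functions over two probe rounds.
trace = [
    (0, "Conv2D::operator()", 62),   # hit: a convolution ran in this window
    (0, "ReluOp::Compute",    240),  # miss
    (1, "ReluOp::Compute",    58),   # hit: a ReLU ran in this window
    (1, "Conv2D::operator()", 233),  # miss
]
print([f for _, f in infer_invocations(trace)])
```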
We show that from this seemingly small amount of information that is leaked to the attacker, much of the victim’s DNN architecture can be extracted with no query access required. To launch this attack, we only assume that: 1) an attacker and a victim are co-located in the same machine, and 2) they use the same shared DL framework. In evaluations, we demonstrate that, by learning whether or not specific functions were invoked during inference, we can extract 8 architecture attributes across 13 neural network architectures with high accuracy. Based on the extracted attributes, we demonstrate how an attacker can reconstruct the architectures of two common networks, VGG16 (Simonyan & Zisserman, 2014) and ResNet50 (He et al., 2016) as proof of concept. We also demonstrate a useful example of DeepRecon through model fingerprinting in a transfer learning attack. Finally, we propose countermeasures to obfuscate an attacker from extracting the correct attributes and sequences using observation attacks like DeepRecon and show that these defenses significantly increase the errors in the extracted attributes and can be implemented in various DL frameworks without hardware or operating system support. 2 BACKGROUND As opposed to attacks that exploit vulnerabilities in software or algorithm implementations, sidechannel attacks utilize information leaks from vulnerabilities in the implementation of computer systems. Due to modern micro-processor architecture that shares the last-level cache (L3 cache) between CPU cores, cache side-channel attacks have become more readily available to implement. Since the cache is involved in almost all the memory access activities on a machine, it can be a medium that includes abundant information about programs running on the host. The fundamental idea of the attack is to monitor the access time to the shared contents, e.g., shared libraries or credentials, between a victim and an attacker while the attacker fills the cache set with the addresses known to her (Prime+Probe (Liu et al., 2015)) or keeps flushing the shared data from the cache (Flush+Reload (Yarom & Falkner, 2014)). In both the cases, once the victim accesses memory or shared data, the attacker can identify which memory addresses or shared data is accessed. Prior work has demonstrated that, with cache side-channels, an attacker can construct covert channels between processes, stealing cryptographic keys, or breaking the isolation between virtual machines (Zhang et al., 2014; Liu et al., 2015). FLUSH+RELOAD Our attack leverages the Flush+Reload technique, which monitors accesses to memory addresses in shared contents. The technique assumes that an attacker can run a spy process on the same host machine. This enables the attacker to monitor the shared data or libraries between her and the victim. During monitoring, the attacker repeatedly calls the clflush assembly instruction to evict the L3 cache lines storing shared content and continually measures the time to reload the content. A fast reload time indicates the data was loaded into the cache by the victim whereas a slow reload time means the data is not used. From this information, the attacker determines what data is currently in use and identifies the control flow (order of function calls) of the victim’s process. We chose Flush+Reload over Prime+Probe since the results from Flush+Reload produce less noise. ATTACKS ON BLACK-BOX DEEP NEURAL NETWORKS Prior work has proposed various methods to attack black-box DNNs. Tramèr et al. (2016) and Papernot et al. 
(2017) demonstrated model extraction attacks on black-box DNNs that aim to learn a substitute model by using the data available to the attacker and observing the query results. Fredrikson et al. (2015) and Shokri et al. (2017) demonstrated model inversion attacks on black-box DNNs that reveal a user’s private information in the training data leveraging model predictions. Wang & Gong (2018) proposed a hyper-parameter stealing attack that aims to estimate the hyper-parameter values used to train a victim model. However, these attacks require unrealistic conditions, e.g., the architecture of the victim network needs to be known to attackers, or the victim uses a network with simple structures, such as multi-layer perceptrons. Thus, the capability of DeepRecon attack that reconstructs black-box DNN architectures can bridge the gap between the realistic black-box scenario and their conditions. RECONSTRUCTING BLACK-BOX DNNS VIA SIDE-CHANNELS Recent studies have discovered various methods to extract the architecture of a black-box DNN. Memory and Timing Side-Channels: Hua et al. (2018) monitored off-chip memory accesses to extract the network architecture of a victim model running on a hardware accelerator. They estimated the possible architecture configurations and extracted model parameters. However, the attack requires physical accesses to the hardware, whereas our attack does not. Power Side-Channel: Wei et al. (2018) demonstrated that an attacker can recover an input image from collected power traces without knowing the detailed parameters in the victim network. However, this approach also assumed an attacker who knows the architecture of a victim network, so our attack could help meet the assumptions of this attack as well. Cache Side-Channel: Concurrent work by Yan et al. (2018) demonstrates that an attacker can reveal the architecture details by reverse engineering and attacking generalized matrix multiply (GeMM) libraries. However, GeMM-based reverse engineering can only reveal the number of parameters of convolutional or fully connected layers because others such as activation and pooling layers are difficult to characterize by matrix multiplications. Also, in order for the monitored functions in GeMM libraries to be in a shared instruction cache of an attacker and a victim, the multiplications must occur on the CPU. However, DeepRecon can be performed independent of the hardware on which the computations occur, generalizing better common hardware on which DNNs run (e.g., GPUs). Using Known Student Models: Wang et al. (2018) proposed a transfer learning technique in which an attacker identifies teacher models by using known student models available from the Internet. This approach assumed that the victim selects the teacher from a set of known architectures. We, however, take this a step further and fingerprint families of architectures as well as many commonly known teacher models. Additionally, we are able to reconstruct arbitrary teacher model architectures with high accuracy. Meta-Models: Oh et al. (2018) demonstrated that an attacker can estimate the victim’s architecture by using a brute-force approach and meta-models. They first trained all the possible architectures of a given set and pruned the models with inferior performance. Then, they trained a meta-model that identified the network architecture using mutated samples and labels. 
However, the pruning process is time intensive (i.e., 40 GPU days for 10k candidates of LeNet (LeCun, 1998)), and the candidates were selected from limited architectural choices, whereas we again go a step further in identifying families of architectures and can generalize to previously unknown teacher models. 3 DEEPRECON ATTACK 3.1 THREAT MODEL Our threat model requires an attacker who can launch a co-located user-level process on the same host machine as the victim. This ensures the attacker and the victim’s process share the same instruction cache. This co-location also allows our attacker to observe the victim DNN’s behavior without actively querying the model, avoiding the common assumption of query access in the literature on black-box attacks. Consider the example of any computer that an attacker has access to at a user-level, the attacker can log into this machine and attack other users with DeepRecon. Another way for an attacker to achieve co-location is to disguise her process as a benign program such as an extension for a browser. Once some victims install the extension in their browser, the attacker can easily launch a monitoring process. We also assume that the attacker and victim use the same opensource DL frameworks shared across users. Importantly, this assumption can be easily met because many popular DL frameworks such as Tensorflow (Abadi et al., 2016) or PyTorch1 are provided as open-source libraries, and this practice of sharing libraries across users is default on major operating systems, e.g., Windows, MacOS, and Ubuntu. Thus, our attacker can identify the addresses of functions to monitor in the instruction cache by reverse-engineering the shared framework’s code. Motivating Attack Example: We provide a practical example where our threat model is applicable. Suppose an attacker aims to install malware on a victim’s machine where an anti-virus system, based on a DNN model, is running. To evade malware detection in common black-box attacks such as the attack proposed in Ilyas et al. (2018), an attacker needs to drop crafted programs actively to monitor the model’s decisions and synthesize an evasive sample based on the collected data. However, when the attacker drops multiple files, her behavior can be detected by the victim. This is further amplified by the need to query the model repeatedly to craft any more malicious files. On the other hand, our attacker induces the victim to install a chrome add-on (which runs at a userlevel) that passively monitors cache behaviors of the model and extracts the architecture. Then, the attacker trains a surrogate model with public datasets (including malware and benign software). With the surrogate model, the attacker crafts her malware that evades detection and can continue to craft malicious files that will be classified as benign offline and without any further observations. As opposed to common black box attacks, our attacker lowers the possibility of being caught because she only monitors the victim model while it is in use and does not need to query the model. 3.2 ATTACK OVERVIEW The overview of DeepRecon attack is described in Fig. 1. The victim’s behaviors are depicted with the dotted lines (black), and the attacker’s actions are described with the solid lines (red). While preparing the attack offline, the attacker first analyzes the deep learning framework that the victim uses and collects the target functions corresponding to the architecture attributes that the attacker wants (Table 1). 
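A rough illustration of this offline preparation step is sketched below. The library path, the symbol keywords, and the reliance on the binary's exported symbol table are assumptions made for illustration; in practice the attacker maps the source lines of Table 1 to code addresses, for example through debug information or disassembly of the shared framework.

```python
# Hedged sketch: listing candidate monitoring targets in the shared framework
# binary before the online phase. The path and keywords below are assumptions.
import subprocess

LIB = "/usr/lib/python3/dist-packages/tensorflow/libtensorflow_framework.so"  # assumed install path

def candidate_symbols(keywords):
    # nm -D lists dynamic symbols; -C demangles C++ names so that kernel
    # classes such as convolution or pooling ops are recognizable.
    out = subprocess.run(["nm", "-DC", LIB], capture_output=True, text=True).stdout
    return [line for line in out.splitlines()
            if any(k in line for k in keywords)]

for sym in candidate_symbols(["Conv2D", "MaxPooling", "Relu", "BiasOp", "MatMul"]):
    print(sym)  # "<offset> <type> <demangled name>": offsets become probe targets
```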
Then later, the attacker launches a co-located process at the user-level that runs along with the victim’s process on the same host machine. When the victim’s process runs training or predictions with its model, the target functions are invoked and the instructions that call them are loaded into the shared instruction cache. The attacker periodically flushes the cache lines and measures the access time to the target instructions. If the victim invokes any of the target functions after flushing, the following access time measured by the attacker will be measurably faster than if the victim does not invoke them. The attacker collects the number and sequence of invocations and then extracts the victim model’s architecture attributes. Then, the attacker reconstructs the victim model’s architecture.

3.3 REVERSE ENGINEERING In Table 1, we analyze the TensorFlow v1.9.0-rc0 framework (https://github.com/tensorflow/tensorflow/releases/tag/v1.9.0-rc0) and list the target functions corresponding to the architecture attributes. We choose TensorFlow due to its popularity as an open source machine learning (ML) framework, but believe that the methods we describe will be applicable to most, if not all, other popular frameworks. In addition to having found some corresponding functions in another popular framework, PyTorch/Caffe2 (https://pytorch.org), our attack leverages the inherent structure of a scalable and widely deployable ML framework, namely library layer abstraction. All of the functions we monitor in TensorFlow are in the core of the library, both below the API interface and above the system-dependent code. Because of this, our attack not only does not depend on the specific TensorFlow API a victim uses but also is agnostic to the type of processing hardware the victim is using, from a single CPU to a cluster of GPUs. The specific functions we monitor in Table 1 represent two subgroups: those corresponding to control flow and those corresponding to architecture attributes. The control flow functions allow us to observe the number of queries the victim makes to the model and the number of layers that are updated by gradient descent if we observe the model when it is being trained. The function that monitors the number of queries is especially important, as it allows us to separate individual observations. The architecture attribute functions are called once per instance of an architecture attribute being present in the neural network, allowing us to see the number of each attribute and the sequence in which they occur in the victim’s architecture. Combined, these functions allow us to observe the architecture attributes of a neural network from start to finish on a given observation. Additionally, the bias operator gradient function, given in the table by #grads, can allow an attacker to figure out the total number of layers that are updated during training time if the attacker observes the training of the model. Using this information, the attacker, already knowing the total number of layers in the architecture, can find the point at which the victim is freezing the backpropagation. This allows the attacker to know which layers are directly inherited from the training model and which layers are specifically trained by the victim. The relevance of this will be discussed in our application of DeepRecon to model fingerprinting (Sec. 4).

Table 1: Target Functions. The monitored functions in the TensorFlow framework (v1.9.0-rc0). Each function corresponds to a control flow or an attribute.
Type | Code | Stage | Func. Name | Location in TensorFlow Code
Control Flow | #queries | T/I | RunCallable() | core/common_runtime/session_ref.cc [line: 154]
Control Flow | #grads | T | compute() | core/kernels/bias_op.cc [line: 218]
Arch. Attributes | #convs | T/I | operator() | core/kernels/conv_ops.cc [line: 122]
Arch. Attributes | #fcs | T/I | compute() | core/kernels/matmul_op.cc [line: 451]
Arch. Attributes | #softms | T/I | compute() | core/kernels/cwise_ops_common.h [line: 240]
Arch. Attributes | #relus | T/I | compute() | core/framework/numeric_op.h [line: 58]
Arch. Attributes | #mpools | T/I | compute() | core/kernels/pooling_ops_common.h [line: 109]
Arch. Attributes | #apools | T/I | compute() | core/kernels/avgpooling_op.cc [line: 76]
Arch. Attributes | #merges | T/I | compute() | core/kernels/cwise_ops_common.h [line: 91]
Arch. Attributes | #biases | T/I | compute() | core/kernels/bias_op.cc [line: 98]
(Note that T stands for training, and I indicates inference. The codes are the number of queries (#queries), gradient updates (#grads), convolutional layers (#convs), fully connected layers (#fcs), softmaxes (#softms), ReLUs (#relus), max poolings (#mpools), avg. poolings (#apools), merge operations (#merges), and bias operations (#biases).)

Limitations. Similar to concurrent work (Yan et al., 2018), we are also able to extract additional information, such as the number of parameters in convolutional and fully connected layers, by monitoring the matrix multiplications in the Eigen library on which TensorFlow is built. This attack provides more fine-grained information, but it does not generalize to computations on hardware other than a CPU. Also, we examine whether our attack can recover the inputs to the model and its parameters. By varying inputs and parameters while monitoring the functions used to compute these parameters using a code coverage tool, GCOV, we find that the framework implements matrix multiplications of parameters in a data-independent way. Thus, we are unable to estimate the inputs and parameters of a victim model. We hypothesize that this is a general limit of cache-based side-channel attacks on DNNs that target instructions, and that obtaining the parameters is reducible to the problem of reading arbitrary victim memory.

3.4 EXTRACTING ARCHITECTURE ATTRIBUTES We run our attack on Ubuntu 16.04 running on a host machine equipped with the i7-4600M processor (8 cores and 4MB L3 cache). Our victim and attacker processes are running at the user-level on the same operating system (OS). Both processes utilize the TensorFlow v1.9.0-rc0 framework. The victim uses the DNN model to make predictions, and the attacker launches the Flush+Reload attack using the Mastik toolkit (Yarom, 2016) to monitor the target functions at the same time. A total of 13 convolutional neural network (CNN) architectures are considered in our experiment: DenseNet121, 169, 201 (Huang et al., 2017), VGG16, 19 (Simonyan & Zisserman, 2014), ResNet50, 101, 152 (He et al., 2016), InceptionV3, InceptionResNet (Szegedy et al., 2015), Xception (Chollet, 2017), MobileNetV1, and MobileNetV2 (Howard et al., 2017). Table 2 includes the extraction results from monitoring VGG16 and ResNet50. The full extraction results from the 13 networks are in Appendix D.
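The extracted counts are compared against ground-truth attribute counts for each architecture. One hedged way to derive such ground truths is to count layer types in a Keras model definition, as sketched below; the mapping from Keras layer classes to the paper's attribute codes is our own simplification and need not correspond one-to-one to the framework-level function invocations the attack observes.

```python
# Hedged sketch: ground-truth attribute counts from a Keras model definition,
# plus the sum-of-absolute-deviations error used to score extractions.
import tensorflow as tf
from tensorflow.keras import layers

def attribute_counts(model):
    counts = {"#convs": 0, "#fcs": 0, "#relus": 0,
              "#mpools": 0, "#apools": 0, "#merges": 0}
    for layer in model.layers:
        if isinstance(layer, layers.Conv2D):
            counts["#convs"] += 1
        elif isinstance(layer, layers.Dense):
            counts["#fcs"] += 1
        elif isinstance(layer, layers.MaxPooling2D):
            counts["#mpools"] += 1
        elif isinstance(layer, (layers.AveragePooling2D, layers.GlobalAveragePooling2D)):
            counts["#apools"] += 1
        elif isinstance(layer, layers.Add):
            counts["#merges"] += 1
        elif isinstance(layer, layers.Activation) and \
                getattr(layer.activation, "__name__", "") == "relu":
            counts["#relus"] += 1
    return counts

def extraction_error(extracted, ground_truth):
    # Sum of absolute deviations from the ground truth, per the evaluation.
    return sum(abs(extracted[k] - ground_truth[k]) for k in ground_truth)

truth = attribute_counts(tf.keras.applications.ResNet50(weights=None))  # no weight download
print(truth, extraction_error(truth, truth))  # zero error against itself
```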
We first show the results from a Short attack, where an attacker can only run her process on a short interval of time, observing only a single query of the 3http://eigen.tuxfamily.org/index.php 4https://gcc.gnu.org/onlinedocs/gcc/Gcov.html 5Note that we use alpha = 1.0 for both. network. We randomly choose ten individual queries and average the attributes. We report errors as the sum of absolute deviations from ground truths. In VGG19, our attacker has 2.6 errors on average and 3.1 in ResNet50. We also show the extraction results from 10 continuous observations (L), in which the attacker runs her process for a more extended period of time. The error rates in both the networks are similar. These results demonstrate that DeepRecon achieves better accuracy by only observing a running network than prior work that assumes query access (Oh et al., 2018). 3.5 RECONSTRUCTING THE ARCHITECTURE OF BLACK-BOX DEEP NEURAL NETWORKS Based on the extracted architecture information, DeepRecon reconstructs the entire DNN architecture of the victim model. In these examples, we focus on the fact that most CNN architectures consist of the feature extractor and classifier layers. The feature extractor is located in the earlier layers and is the combination of basic building blocks. The classifier is a set of fully connected layers at the end of the network. In VGGs and ResNets, there are standard blocks used in each CNN architecture as we depict in Fig. 2. Each block includes activation layers with preceding convolutional layers. In the classifier layers, each fully connected layer is followed by an activation layer. We describe the reconstruction process of ResNet50 in Table 3. (Note that we also reconstructed VGG16 without errors and show the result in Appendix A.) In this table, we compare the computation sequences observed by the attacker with the actual computations in ResNet50. We can see the sequences are accurately captured with a few errors. The three steps at the bottom describe the reconstruction processes of our attacker. Our attacker first identifies (1) the number of blocks by counting the (max-)pooling layers. Once the attacker separates blocks with the pooling layer locations, she counts (2) the number of convolutional layers in each block. In ResNets, we know that the Residual block has four convolutional layers, and the Identity block has three convolutional layers in each. Thus, the attacker can identify the type of each block. After that, the attacker estimates (3) the number of fully connected layers at the end. Finally, with this block-level information, our attacker successfully estimates the victim architecture is the ResNet50 with high accuracy. Discussion about the Reconstruction Errors. We also examine whether the errors in our experiments have specific patterns, allowing our attacker to filter them out. However, we could not find any pattern: the types and locations of error attributes are different in each run (over 10 runs). Thus, Arch. Data Computations Sequences (Layers in the Ground Truth) ResNet50 G CR PM CR CR C C MR CR CR C MR CR CR C MR CR CR C C MR CR CR C MR CR CR C MR CR CR C MR CR CR C C MR CR CR C MR CR CR C MR CR CR C MR CR CR C MR CR CR C MR CR CR C C MR CR CR C MR CR CR C MR PA FSo S CR PM C CR CR C MR CR CR C MR CR CR C MR C CR CR C MR CR CR ... MR CR CR C MR CR CR C MR C CR CR C MR CR CR C MR CR CR C MR CR CR C MR CR CR C MR R CR C MR C CR CR C MR CR CR C MR CR CR C MR PA FSo Recon. Steps Details ResNet50 Recon. (1) Block 1. Block 2. Block 3 Block 4. Block 5. 
Block 6. Block 7. Block 8. Block 9. Block 10. Block 11. Block 12. Block 13. Block 14. Block 15. Block 16. Block 17. Block 18. (2) Input Residual Block Identity Block Identity Block Residual Block Identity Block Identity Block Identity Block Residual Block Identity Block Identity Block Identity Block Identity Block Identity Block Residual Block Identity Block Identity Block Fully Connecteds (3) ResNet 50: Configuration with 50-Layers (Note that C,P ,F ,M indicate the Convolutional, Pooling, Fully connected, Merge layers, and the subscripts mean the activations (R: ReLU and So: Softmax).) Table 3: Reconstruction Process of ResNet50 Architecture. We list the computation sequences captured by our attack in the above rows and the reconstruction process at the bottom rows. The errors in capturing the correct computation sequences by our attacker are marked as bold and red. we attribute these errors to two primary causes. First, there can be background noise from other processes that our Flush+Reload attack picks up, e.g., a process can pull data into the L3 cache and evict the target function between when the victim calls the function and we reload it. In this case, our attacker cannot observe the victim calling the function. Second, our attack can experience common errors associated with the Flush+Reload attack (Yarom & Falkner, 2014), e.g., a victim invokes the target function when we reload, causing our attacker to see a cache miss instead of correctly observing a cache hit. 4 FINGERPRINTING BLACK-BOX NEURAL NETWORKS Our attacker identifies victim neural network architectures using statistical models trained on the attributes as features and the architectures as labels. This is a powerful capability to have if our attacker aims to fingerprint the architecture of the pre-trained models used in transfer learning. Transfer learning is typically done in a fine-tuned manner: a user creates a student model that uses the architecture and parameters of a teacher model and trains only a few fully connected layers at the end of the network by freezing the backpropagation of the preceding layers. We also found that our attacker can learn the layers in the student model not updated during training when they observe the training process (Sec. 3.3). Thus, our attacker can extract both the network architecture and frozen layers of the student (victim) model. Once our attacker identifies the victim network’s teacher model with this information, the attacker can utilize several pre-trained models available from the Internet to perform further attacks. For instance, an attacker can increase the success rate of black-box attacks by crafting adversarial samples with the internal representation from the teacher models (Wang et al., 2018). Additionally, since adversarial samples transfer across different models for the same task (Tramèr et al., 2017), the attacker is not required to use the exact same teacher model—i.e., she can use any pre-trained model of an architecture family that achieves similar accuracy on a task. An attacker can also perform model extractions easily. This is because the model parameters from the pre-trained models can be readily found on the Internet as well and are used in a victim model in this setting. Finally, if the attacker has a partial knowledge of the victim’s training data and can gain the knowledge of which layers were frozen during training (see Sec. 
3.3), she can fully estimate the parameters of the entire network by independently training the last few layers that were not frozen. To evaluate our fingerprinting attack, we train decision tree classifiers on the 13 networks used in Sec. 3.4 to identify the network architectures using the extracted attributes and labels. We extract the attributes over 50 observations of each network (650 in total) and utilize 5-fold cross-validations. We measure the classification accuracy and analyze the four most essential attributes based on mutual information (MI) scores. Since the attributes are not affected by the host machines or operating systems, the attacker can train the models offline for use in attacks. Table 4 shows the results of fingerprinting the neural networks. We conduct three types of classification tasks with the aim of identifying 1) the entire 13 networks, 2) 5 network families, and 3) architecture variants in each network6. We report the accuracy of the best decision trees and the average accuracy over the cross-validations. In all the tasks, our decision trees achieve 100% accuracy, which demonstrates that, once trained, these statistical models can be perfect predictors. (Note that we also visualize our data in an attribute space via PCA analysis in Appendix C.) We also identified the four essential attributes across all the classifications: 1) #relus, 2) #merges, 3) #convs, and 4) #apools. Identifying these influential attributes can guide a potential obfuscation-based defensive strategy against such side-channel attacks.

Table 4: Fingerprinting Performance and Important Attributes. Each row corresponds to one task. We list the accuracy of the best classifiers and the essential attributes based on the MI scores, denoted by the numbers in brackets.
Task | Networks | Acc. [Avg.] | Important Attributes
Total | - | 1.0 [0.9046] | #relus [0.2575], #merges [0.2534], #convs [0.2497], #biases [0.1034]
Family | - | 1.0 [0.9938] | #relus [0.4621], #convs [0.4421], #mpools [0.3382], #apools [0.2752]
Arch. Variants | V | 1.0 [0.9867] | #relus [0.6982], #convs [0.6982], #biases [0.6898]
Arch. Variants | R | 1.0 [0.9900] | #relus [0.6399], #merges [0.6399], #convs [0.6399], #biases [0.3750]
Arch. Variants | D | 1.0 [0.9867] | #relus [0.6399], #merges [0.6399], #convs [0.6100]
Arch. Variants | I | 1.0 [1.0000] | #convs [0.6399], #merges [0.6399], #apools [0.5875], #biases [0.3373]
Arch. Variants | M | 1.0 [1.0000] | #relus [0.6982], #convs [0.6982], #fcs [0.6595], #softms [0.6228]
(Note that V, R, D, I, and M indicate VGGs, ResNets, DenseNets, InceptionNets, and MobileNets.)

5 DEFENSES TO DEEPRECON ATTACK Previous studies on defenses against cache side-channel attacks (Kong et al., 2013; Zhou et al., 2016) require specific hardware (page-locked cache) or kernel-level features (page coloring). These solutions have not been widely deployed or have remained as optional features because of their impact on computational performance. Hence, we propose framework-level defenses that do not require specialized hardware or kernel-level updates. Our findings in Sec. 4 regarding the essential architecture attributes for an attack, e.g., #relus, #convs, and #merges, guide our search for defenses. As a result, we propose obfuscating the attacker’s observations of these attributes. We show that these defenses significantly reduce the success of our DeepRecon attack, and they can be extended to protect against a more general class of cache side-channel attacks against DL frameworks.
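Before describing the individual defenses, the attribute-based fingerprinting of Section 4 can be made concrete with a minimal sketch. The per-network attribute profiles and the noise model below are placeholders rather than the paper's measured data; only the overall recipe (decision trees, 5-fold cross-validation, mutual-information scores) follows the text.

```python
# Hedged sketch of the fingerprinting classifier: extracted attribute counts
# in, architecture label out. Profiles and noise are illustrative placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import mutual_info_classif
from sklearn.preprocessing import LabelEncoder

FEATURES = ["#convs", "#fcs", "#softms", "#relus",
            "#mpools", "#apools", "#merges", "#biases"]
PROFILES = {                       # assumed per-network attribute vectors
    "VGG16":    [13, 3, 1, 15, 5, 0, 0, 16],
    "VGG19":    [16, 3, 1, 18, 5, 0, 0, 19],
    "ResNet50": [53, 1, 1, 49, 1, 1, 16, 54],
}

rng = np.random.default_rng(0)
X, y = [], []
for name, profile in PROFILES.items():
    for _ in range(50):            # 50 noisy observations per network
        noise = rng.integers(-2, 3, size=len(FEATURES))
        X.append(np.maximum(0, np.array(profile) + noise))
        y.append(name)
X, y = np.array(X), np.array(y)

clf = DecisionTreeClassifier(random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
y_enc = LabelEncoder().fit_transform(y)
print("MI scores:", dict(zip(FEATURES, mutual_info_classif(X, y_enc, random_state=0).round(3))))
```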
5.1 RUNNING DECOY PROCESSES WITH TINY MODELS DeepRecon and other potential cache side-channel attacks on deep learning frameworks can only observe that a library function is called, but not by whom. By running an extra process (i.e., a decoy process) simultaneously with the actual process, we develop a simple but effective defensive strategy. The decoy process also invokes the target functions in the shared framework, which obfuscates the architecture attributes and computation sequences. To minimize the computational overhead, we utilize networks that only have a few layers, referred to as TinyNets. To evaluate our defense, we train these TinyNets at the same time as the victim’s process is running ResNet50, and we measure the number of errors in the extracted attributes. The results are listed in Table 5. We experiment with three TinyNets: 1) only with a Conv. layer (C:1), 2) Conv. with a ReLU layer (C:1, R:1), and 3) two Conv. and ReLU layers with a Merge layer. We found the existence of a decoy process significantly hinders the attacker’s ability to extract correct attributes, causing 1283-2211 errors in the attribute extractions. Contrasted with up to 2.9 errors on average from DeepRecon’s previous extractions (Section 3.4), we find that this defense is 6In the task 3), we consider MobileNet and MobileNetV2 as the same family. Network Arch. Attributes Errors Time #convs #fcs #softms #relus #mpools #apools #merges #biases ResNet50 - 54.5 1 1 48.9 1.1 1 16 49.8 2.9 17.88 ResNet50 + TinyNets C:1 368.75 1.05 1.05 47.05 0.95 1.00 687.75 347.80 1282.40 23.79 C:1 R:1 360.00 1.15 1.15 394.00 1.00 1.00 675.95 350.80 1612.05 23.85 C:2 R:2 M:1 414.55 1.00 1.00 715.10 1.10 1.00 782.25 468.25 2211.25 26.26 Table 5: Effectiveness of the Decoy Process. We compare the 8 attributes extracted from 10 runs, average errors, and average time with and without TinyNets. Note that C refers to the number of convolutional layers, R refers to the number of relu activation layers, and M refers to the number of merge layers. exceedingly effective at curbing this type of reconstruction attack. We also show that we can increase the errors associated with the attributes that we aim to obfuscate. For instance, when we run the TinyNet with only one convolutional layer, we observe the #convs is significantly increased. This is important because, with our defenses, a defender can choose the attributes to obfuscate. Since the defender can control what noise gets introduced, they can also dynamically and adaptively change what noise is added into the attackers observations, thereby increasing our defenses effectiveness and generalizability. To quantify the overhead required of this defense, we measure the average network inference time with and without a decoy process, and we observe that the defense increases the inference time by only 5.91-8.38 seconds per inference. Thus, this defense is a reasonable measure to combat cache side-channel attacks that reconstruct DNNs. 5.2 OBLIVIOUS MODEL COMPUTATIONS Another defense against DeepRecon is to obfuscate the order and number of the computations (i.e., function invocations) observed by our attacker using oblivious model computations. We propose two approaches. First, we can update the victim’s architecture by adding extra layers. To minimize the side-effects such as performance loss, these layers return the output with the same dimensions as the input. 
For instance, the convolutional layer with kernel size 3, padding size 1, and strides of length 1 can preserve the input dimensions, and the identity block of the ResNets preserves this as well. Thus, we augment the original architecture by adding such layers at the random location to make the same architecture look different in the attacker’s point of view. Prior work (Targ et al., 2016) has shown the unraveled view of the ResNet architecture. Under this view, the skip-connections of a network can be expressed as the ensemble of multiple computational paths that can be computed independently. Hence, we try splitting a computational path with skipconnections into multiple paths without skip-connections. In forward propagation, the multiple paths are randomly chosen and computed so that our attacker finds it difficult to capture the exact architecture. To evaluate our intuition, we construct the obfuscated architecture of ResNet50 (see Appendix B) and extract the attributes by using our attack. Our results are shown in Table 6. Using this defense, the errors detected by DeepRecon increased from 2-3 to 28 for ResNet50. During this test, the first 3 blocks over the entire 16 blocks of ResNet50 are obfuscated. While less effective than our previous defense, we conclude that this defense still can marginally obfuscate the observations of our attacker. Additionally, the gain on computational time is also small: it only increased from 17.88 to 24.03 seconds. 6 CONCLUSION This paper conducts the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels. We first define the realistic threat model for these attacks: our attacker does not require the ability to query the victim model; she runs a co-located process on the machine where the victims DL system is running and passively monitors the accesses of target functions in a shared framework. We also present DeepRecon, an attack that reconstructs the architecture of a victim network using the architecture attributes extracted via the Flush+Reload technique. Based on the extracted attributes, we further demonstrate that an attacker can build a meta-model that precisely fingerprints the architecture and family of the pre-trained model in a transfer learning setting. With the meta-model, we identified the essential attributes for these attacks. Finally, we propose and evaluate new framework-level defense techniques that obfuscate our attackers observations. Our empirical security analysis represents a step toward understanding how DNNs are vulnerable to side-channel attacks. D ATTRIBUTES EXTRACTION RESULTS FROM OTHER NETWORKS We show the attributes extraction results with the other 11 networks in Table 9.
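As a closing illustration of the decoy defense from Section 5.1 above, the sketch below runs a TinyNet-style model so that a separate user-level process keeps invoking the same framework functions the attacker monitors. The layer mix, input shape, and training loop are assumptions made for illustration, not the paper's exact decoy models.

```python
# Hedged sketch of a decoy process (Section 5.1): a TinyNet whose training
# keeps the monitored framework functions "hot" in the shared instruction
# cache, adding noise to the attacker's attribute counts.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def tiny_net(n_conv=2, n_relu=2, with_merge=True):
    inp = layers.Input((32, 32, 3))
    x = inp
    for _ in range(n_conv):
        x = layers.Conv2D(8, 3, padding="same")(x)
    for _ in range(n_relu):
        x = layers.Activation("relu")(x)
    if with_merge:
        x = layers.Add()([x, layers.Conv2D(8, 1)(inp)])   # adds #merges noise
    out = layers.Dense(2, activation="softmax")(layers.GlobalAveragePooling2D()(x))
    return tf.keras.Model(inp, out)

if __name__ == "__main__":
    decoy = tiny_net()
    decoy.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
    x = np.random.rand(64, 32, 32, 3).astype("float32")
    y = np.random.randint(0, 2, size=64)
    while True:                      # run as a separate user-level process
        decoy.fit(x, y, epochs=1, batch_size=16, verbose=0)
```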
1. What is the focus of the paper regarding cache side-channel attacks? 2. What are the strengths of the paper, particularly in its threat model and evaluation? 3. Do you have any concerns or questions about the setup and methodology used in the paper? 4. How does the reviewer assess the novelty and contribution of the paper compared to prior works? 5. Are there any specific aspects of the paper that require further explanation or discussion, such as error patterns or comparisons with other works?
Review
Review This paper performs cache side-channel attacks to extract attributes of a victim model, and infer its architecture accordingly. In their threat model, the attacker could launch a co-located process on the same host machine, and use the same DL framework as the victim model. Their evaluation shows that: (1) their attacks can extract the model attributes pretty well, including the number of different types of layers; (2) using these attributes, they train a decision tree classifier among 13 CNN architectures, and show that they can achieve a nearly perfect classification accuracy. They also evaluate some defense strategies against their attacks. Model extraction attack under a black-box setting is an important topic, and I am convinced that their threat model is a good step towards real-world attacks. As for the novelty, although Yan et al. also evaluate cache side-channel attacks, that paper was released pretty shortly before ICLR deadline, thus I would consider this work as an independent contribution at its submission. I have several questions and comments about this paper: - One difference of the evaluation setup between this paper and Yan et al. is that in Yan et al., they are trying to infer more detailed hyper-parameters of the architecture (e.g., the number of neurons, the dimensions of each layer, the connections), but within a family of architectures (i.e., VGG or ResNet). On the other hand, in this paper, the authors extract higher-level attributes such as the number of different layers and activation functions, and predict the model family (from 5 options) or the concrete model architecture (from 13 options). While I think inferring the model family type is also an interesting problem, this setup is still a little contrived. Would the classifier predict the family of a model correctly if it is not included in the training set, say, could it predict ResNet32 as R (ResNet)? - In Table 3, it looks like the errors in the captured computation sequences show some patterns. Are these error types consistent across different runs? Could you provide some explanation of these errors? - In Table 5, my understanding is that we need to compare the avg errors to the numbers in Table 2. In this case, the errors seem to be even larger than the sum of the attribute values. Is this observation correct? If so, could you discuss what attributes are most wrongly captured, and show some examples? - It would be beneficial to provide a more detailed comparison between this work and Yan et al., e.g., whether the technique proposed in this work could be also extended to infer more fine-grained attributes of a model, and go beyond a classification among a pre-defined set of architectures. - The paper needs some editing to fix some typos. For example, in Table 5, the captions of Time (Baseline) and Time (+TinyNet) should be changed, and it looks confusing at the first glance.
ICLR
Title Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks Abstract Recent work has introduced attacks that extract the architecture information of deep neural networks (DNN), as this knowledge enhances an adversary’s capability to conduct attacks on black-box networks. This paper presents the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels. First, we define the threat model for these attacks: our adversary does not need the ability to query the victim model; instead, she runs a co-located process on the host machine where the victim’s deep learning (DL) system is running and passively monitors the accesses of the target functions in the shared framework. Second, we introduce DeepRecon, an attack that reconstructs the architecture of the victim network using the internal information extracted via Flush+Reload, a cache side-channel technique. Once the attacker observes function invocations that map directly to architecture attributes of the victim network, the attacker can reconstruct the victim’s entire network architecture. In our evaluation, we demonstrate that an attacker can accurately reconstruct two complex networks (VGG19 and ResNet50) having observed only one forward propagation. Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pretrained model in a transfer learning setting. From this meta-model, we evaluate the importance of the observed attributes in the fingerprinting process. Third, we propose and evaluate new framework-level defense techniques that obfuscate our attacker’s observations. Our empirical security analysis represents a step toward understanding DNNs’ vulnerability to cache side-channel attacks. 1 INTRODUCTION Deep neural networks (DNNs) have become an essential tool in various applications, such as face recognition, speech recognition, malware detection, and autonomous driving or aviation (Parkhi et al., 2015; Amodei et al., 2016; Arp et al., 2014; Chen et al., 2015; Smolyanskiy et al., 2017). A DNN’s performance depends widely on the network architecture—the number and types of layers, how the layers are connected, and the activation functions—and, unfortunately, there is no universal architecture that performs well on all tasks. Consequently, researchers and practitioners have devoted substantial efforts to design various DNN architectures to provide high performance for different learning tasks. Owing to their critical role, DNN architectures represent attractive targets for adversaries who aim to mount DNN fingerprinting attacks. In such an attack, the adversary probes a DNN model, considered confidential, until she infers enough attributes of the network to distinguish it among other candidate architectures. In addition to revealing valuable and secret information to the adversary, DNN fingerprinting can enable further attacks on black-box models. While the prior work on adversarial machine learning often assumes a white-box setting, where the adversary knows the DNN model under attack, these attacks are usually unrealistic in practice (Suciu et al., 2018). In consequence, researchers have started focusing on a black-box setting, where model architecture is unknown to the adversary. However, in this setting, the adversary often makes some assumptions about the victim model in order to craft successful adversarial examples (Papernot et al., 2017). 
Instead of approxi- mating, the adversary can start by conducting a DNN fingerprinting attack to infer the information required about the model, then use this information to craft adversarial examples that can evade the model. This can also enable model extraction attacks (Tramèr et al., 2016; Kurakin et al., 2016; Wang & Gong, 2018) and membership inference or model inversion attacks (Shokri et al., 2017; Long et al., 2018). Because of the large number and types of architectural attributes, and the subtle effect that each attribute has on the model’s inferences, DNN fingerprinting is challenging when using the typical methods employed in the adversarial machine learning literature. For example, Wang & Gong (2018) propose a hyperparameter stealing attack that requires knowledge of the training dataset, the ML algorithm, and the learned model parameters, yet is unable to extract the model architecture. Wang et al. (2018) demonstrate a fingerprinting attack against transfer learning; however, they rely on the assumption that the teacher model and learning parameters are known to the attacker. To overcome these challenges, recent work has started to investigate attacks that utilize information leaked by architectural side-channels on the hardware where the DNN model runs. Hua et al. (2018) extract the network architecture of a model running on a hardware accelerator by monitoring off-chip memory addresses. Yan et al. (2018) reduce the search space from 1035 to 16 candidates within a given network architecture by exploiting cache side-channels. In this paper, we ask the question: how vulnerable are DNNs to side-channel attacks, and what information do adversaries need for architecture fingerprinting? We perform, to the best of our knowledge, the first security analysis of DNNs operating in the presence of cache side-channel attacks. Specifically, we define the threat model for these attacks, including the adversary’s capabilities and limitations. We then introduce DeepRecon, an efficient attack that reconstructs a black-box DNN architecture by exploiting the Flush+Reload (Yarom & Falkner, 2014) technique, and we further evaluate the importance of specific architectural attributes in the success of fingerprinting. Finally, we propose and evaluate new framework-level defenses against these attacks. Our attack works by targeting lines of code corresponding to the execution of specific network architecture attributes of a deep learning (DL) framework. Specifically, these lines of code correspond to instructions to execute functions that are mapped into the instruction cache when the functions are invoked. Once these lines of code are identified, our attack flushes them from the instruction cache shared by the attacker and the victim. The attacker waits for the victim’s process to run and then measures the time it takes to re-access those same lines of code. If the victim’s DNN model has accessed any of these particular functions, the corresponding lines of code will be present in the instruction cache when the attacker tries to re-access them. Therefore, the access time to call these functions will be measurably faster than if the victim had not loaded them back into the shared instruction cache. On the other hand, if the victim DNN model did not access these particular functions, the corresponding lines will not be present in the cache when accessed by the attacker, and thus the access time will be measurably slower. 
We show that from this seemingly small amount of information that is leaked to the attacker, much of the victim’s DNN architecture can be extracted with no query access required. To launch this attack, we only assume that: 1) an attacker and a victim are co-located in the same machine, and 2) they use the same shared DL framework. In evaluations, we demonstrate that, by learning whether or not specific functions were invoked during inference, we can extract 8 architecture attributes across 13 neural network architectures with high accuracy. Based on the extracted attributes, we demonstrate how an attacker can reconstruct the architectures of two common networks, VGG16 (Simonyan & Zisserman, 2014) and ResNet50 (He et al., 2016) as proof of concept. We also demonstrate a useful example of DeepRecon through model fingerprinting in a transfer learning attack. Finally, we propose countermeasures to obfuscate an attacker from extracting the correct attributes and sequences using observation attacks like DeepRecon and show that these defenses significantly increase the errors in the extracted attributes and can be implemented in various DL frameworks without hardware or operating system support. 2 BACKGROUND As opposed to attacks that exploit vulnerabilities in software or algorithm implementations, sidechannel attacks utilize information leaks from vulnerabilities in the implementation of computer systems. Due to modern micro-processor architecture that shares the last-level cache (L3 cache) between CPU cores, cache side-channel attacks have become more readily available to implement. Since the cache is involved in almost all the memory access activities on a machine, it can be a medium that includes abundant information about programs running on the host. The fundamental idea of the attack is to monitor the access time to the shared contents, e.g., shared libraries or credentials, between a victim and an attacker while the attacker fills the cache set with the addresses known to her (Prime+Probe (Liu et al., 2015)) or keeps flushing the shared data from the cache (Flush+Reload (Yarom & Falkner, 2014)). In both the cases, once the victim accesses memory or shared data, the attacker can identify which memory addresses or shared data is accessed. Prior work has demonstrated that, with cache side-channels, an attacker can construct covert channels between processes, stealing cryptographic keys, or breaking the isolation between virtual machines (Zhang et al., 2014; Liu et al., 2015). FLUSH+RELOAD Our attack leverages the Flush+Reload technique, which monitors accesses to memory addresses in shared contents. The technique assumes that an attacker can run a spy process on the same host machine. This enables the attacker to monitor the shared data or libraries between her and the victim. During monitoring, the attacker repeatedly calls the clflush assembly instruction to evict the L3 cache lines storing shared content and continually measures the time to reload the content. A fast reload time indicates the data was loaded into the cache by the victim whereas a slow reload time means the data is not used. From this information, the attacker determines what data is currently in use and identifies the control flow (order of function calls) of the victim’s process. We chose Flush+Reload over Prime+Probe since the results from Flush+Reload produce less noise. ATTACKS ON BLACK-BOX DEEP NEURAL NETWORKS Prior work has proposed various methods to attack black-box DNNs. Tramèr et al. (2016) and Papernot et al. 
(2017) demonstrated model extraction attacks on black-box DNNs that aim to learn a substitute model by using the data available to the attacker and observing the query results. Fredrikson et al. (2015) and Shokri et al. (2017) demonstrated model inversion attacks on black-box DNNs that reveal a user’s private information in the training data leveraging model predictions. Wang & Gong (2018) proposed a hyper-parameter stealing attack that aims to estimate the hyper-parameter values used to train a victim model. However, these attacks require unrealistic conditions, e.g., the architecture of the victim network needs to be known to attackers, or the victim uses a network with simple structures, such as multi-layer perceptrons. Thus, the capability of DeepRecon attack that reconstructs black-box DNN architectures can bridge the gap between the realistic black-box scenario and their conditions. RECONSTRUCTING BLACK-BOX DNNS VIA SIDE-CHANNELS Recent studies have discovered various methods to extract the architecture of a black-box DNN. Memory and Timing Side-Channels: Hua et al. (2018) monitored off-chip memory accesses to extract the network architecture of a victim model running on a hardware accelerator. They estimated the possible architecture configurations and extracted model parameters. However, the attack requires physical accesses to the hardware, whereas our attack does not. Power Side-Channel: Wei et al. (2018) demonstrated that an attacker can recover an input image from collected power traces without knowing the detailed parameters in the victim network. However, this approach also assumed an attacker who knows the architecture of a victim network, so our attack could help meet the assumptions of this attack as well. Cache Side-Channel: Concurrent work by Yan et al. (2018) demonstrates that an attacker can reveal the architecture details by reverse engineering and attacking generalized matrix multiply (GeMM) libraries. However, GeMM-based reverse engineering can only reveal the number of parameters of convolutional or fully connected layers because others such as activation and pooling layers are difficult to characterize by matrix multiplications. Also, in order for the monitored functions in GeMM libraries to be in a shared instruction cache of an attacker and a victim, the multiplications must occur on the CPU. However, DeepRecon can be performed independent of the hardware on which the computations occur, generalizing better common hardware on which DNNs run (e.g., GPUs). Using Known Student Models: Wang et al. (2018) proposed a transfer learning technique in which an attacker identifies teacher models by using known student models available from the Internet. This approach assumed that the victim selects the teacher from a set of known architectures. We, however, take this a step further and fingerprint families of architectures as well as many commonly known teacher models. Additionally, we are able to reconstruct arbitrary teacher model architectures with high accuracy. Meta-Models: Oh et al. (2018) demonstrated that an attacker can estimate the victim’s architecture by using a brute-force approach and meta-models. They first trained all the possible architectures of a given set and pruned the models with inferior performance. Then, they trained a meta-model that identified the network architecture using mutated samples and labels. 
However, the pruning process is time-intensive (i.e., 40 GPU days for 10k candidates of LeNet (LeCun, 1998)), and the candidates were selected from limited architectural choices, whereas we again go a step further in identifying families of architectures and can generalize to previously unknown teacher models.

3 DEEPRECON ATTACK

3.1 THREAT MODEL

Our threat model requires an attacker who can launch a co-located user-level process on the same host machine as the victim. This ensures that the attacker's and the victim's processes share the same instruction cache. This co-location also allows our attacker to observe the victim DNN's behavior without actively querying the model, avoiding the common assumption of query access in the literature on black-box attacks. Consider, for example, any computer that an attacker can access at the user level: the attacker can log into this machine and attack other users with DeepRecon. Another way for an attacker to achieve co-location is to disguise her process as a benign program such as a browser extension. Once some victims install the extension in their browser, the attacker can easily launch a monitoring process. We also assume that the attacker and victim use the same open-source DL framework, shared across users. Importantly, this assumption can be easily met because many popular DL frameworks such as TensorFlow (Abadi et al., 2016) or PyTorch (https://pytorch.org) are provided as open-source libraries, and this practice of sharing libraries across users is the default on major operating systems, e.g., Windows, MacOS, and Ubuntu. Thus, our attacker can identify the addresses of the functions to monitor in the instruction cache by reverse-engineering the shared framework's code.

Motivating Attack Example: We provide a practical example where our threat model is applicable. Suppose an attacker aims to install malware on a victim's machine where an anti-virus system, based on a DNN model, is running. To evade malware detection with common black-box attacks such as the one proposed by Ilyas et al. (2018), an attacker needs to actively drop crafted programs to monitor the model's decisions and synthesize an evasive sample based on the collected data. However, when the attacker drops multiple files, her behavior can be detected by the victim. This is further amplified by the need to query the model repeatedly to craft any more malicious files. On the other hand, our attacker induces the victim to install a Chrome add-on (which runs at the user level) that passively monitors the cache behavior of the model and extracts the architecture. Then, the attacker trains a surrogate model with public datasets (including malware and benign software). With the surrogate model, the attacker crafts malware that evades detection and can continue to craft malicious files that will be classified as benign, offline and without any further observations. As opposed to common black-box attacks, our attacker lowers the possibility of being caught because she only monitors the victim model while it is in use and does not need to query the model.

3.2 ATTACK OVERVIEW

The overview of the DeepRecon attack is described in Fig. 1. The victim's behaviors are depicted with the dotted lines (black), and the attacker's actions are described with the solid lines (red). While preparing the attack offline, the attacker first analyzes the deep learning framework that the victim uses and collects the target functions corresponding to the architecture attributes that the attacker wants (Table 1).
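To make this offline preparation step concrete, the sketch below shows one way an attacker could locate the monitored functions' symbols inside the shared framework binary before the online phase. The library path and symbol substrings are illustrative assumptions (the exact names depend on the framework build); only standard binutils tooling (nm) is assumed.

```python
import subprocess

# Illustrative path: the actual shared object depends on how TensorFlow was installed.
LIB = "/usr/lib/python3/dist-packages/tensorflow/libtensorflow_framework.so"

# Substrings of the (demangled) symbol names to look for. These mirror the kinds of
# functions listed in Table 1 and are assumptions for illustration only.
TARGETS = ["RunCallable", "BiasOp", "Conv2DOp", "MatMulOp", "MaxPoolingOp", "ReluOp"]

def locate_symbols(lib_path, targets):
    """Return {demangled_symbol: offset} for defined symbols matching a target substring."""
    out = subprocess.run(
        ["nm", "-D", "-C", "--defined-only", lib_path],
        capture_output=True, text=True, check=True,
    ).stdout
    found = {}
    for line in out.splitlines():
        parts = line.split(maxsplit=2)        # "<value> <type> <demangled name>"
        if len(parts) != 3 or not any(t in parts[2] for t in targets):
            continue
        try:
            found[parts[2]] = int(parts[0], 16)
        except ValueError:                    # symbols without a numeric value
            continue
    return found

if __name__ == "__main__":
    for name, offset in sorted(locate_symbols(LIB, TARGETS).items()):
        print(f"{offset:#014x}  {name}")
```

In practice, the recovered offsets are what a Flush+Reload tool such as Mastik would be pointed at during the online monitoring phase.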
The attacker then launches a co-located process at the user level that runs alongside the victim's process on the same host machine. When the victim's process runs training or predictions with its model, the target functions are invoked and the instructions that call them are loaded into the shared instruction cache. The attacker periodically flushes the cache lines and measures the access time to the target instructions. If the victim invokes any of the target functions after flushing, the subsequent access time measured by the attacker will be measurably faster than if the victim does not invoke them. The attacker collects the number and sequence of invocations and then extracts the victim model's architecture attributes. Finally, the attacker reconstructs the victim model's architecture.

3.3 REVERSE ENGINEERING

In Table 1, we analyze the TensorFlow v1.9.0-rc0 framework (https://github.com/tensorflow/tensorflow/releases/tag/v1.9.0-rc0) and list the target functions corresponding to the architecture attributes. We choose TensorFlow due to its popularity as an open-source machine learning (ML) framework, but believe that the methods we describe will be applicable to most, if not all, other popular frameworks. In addition to having found some corresponding functions in another popular framework, PyTorch/Caffe2, our attack leverages the inherent structure of a scalable and widely deployable ML framework, namely library layer abstraction. All of the functions we monitor in TensorFlow are in the core of the library, both below the API interface and above the system-dependent code. Because of this, our attack not only does not depend on the specific TensorFlow API a victim uses but is also agnostic to the type of processing hardware the victim is using, from a single CPU to a cluster of GPUs. The specific functions we monitor in Table 1 fall into two subgroups: those corresponding to control flow and those corresponding to architecture attributes. The control-flow functions allow us to observe the number of queries the victim makes to the model and, if we observe the model while it is being trained, the number of layers that are updated by gradient descent. The function that monitors the number of queries is especially important, as it allows us to separate individual observations. The architecture-attribute functions are called once per instance of an architecture attribute being present in the neural network, allowing us to see the number of each attribute and the sequence in which they occur in the victim's architecture. Combined, these functions allow us to observe the architecture attributes of a neural network from start to finish on a given observation. Additionally, the bias operator gradient function, denoted in the table by #grads, can allow an attacker to figure out the total number of layers that are updated during training time if the attacker observes the training of the model. Using this information, the attacker, already knowing the total number of layers in the architecture, can find the point at which the victim is freezing the backpropagation. This allows the attacker to know which layers are directly inherited from the training model and which layers are specifically trained by the victim. The relevance of this will be discussed in our application of DeepRecon to model fingerprinting (Sec. 4).

Type | Code | Stage | Func. Name | Location in TensorFlow Code
Control Flow | #queries | T/I | RunCallable() | core/common_runtime/session_ref.cc [line: 154]
Control Flow | #grads | T | compute() | core/kernels/bias_op.cc [line: 218]
Arch. Attributes | #convs | T/I | operator() | core/kernels/conv_ops.cc [line: 122]
Arch. Attributes | #fcs | T/I | compute() | core/kernels/matmul_op.cc [line: 451]
Arch. Attributes | #softms | T/I | compute() | core/kernels/cwise_ops_common.h [line: 240]
Arch. Attributes | #relus | T/I | compute() | core/framework/numeric_op.h [line: 58]
Arch. Attributes | #mpools | T/I | compute() | core/kernels/pooling_ops_common.h [line: 109]
Arch. Attributes | #apools | T/I | compute() | core/kernels/avgpooling_op.cc [line: 76]
Arch. Attributes | #merges | T/I | compute() | core/kernels/cwise_ops_common.h [line: 91]
Arch. Attributes | #biases | T/I | compute() | core/kernels/bias_op.cc [line: 98]
(Note that T stands for training and I for inference.)
Table 1: Target Functions. The monitored functions in the TensorFlow framework (v1.9.0-rc0). Each function corresponds to a control-flow event or an architecture attribute. [Note that the codes are the number of queries (#queries), gradient updates (#grads), convolutional layers (#convs), fully connected layers (#fcs), softmaxes (#softms), ReLUs (#relus), max poolings (#mpools), avg. poolings (#apools), merge operations (#merges), and bias operations (#biases).]

Limitations. Similar to concurrent work (Yan et al., 2018), we are also able to extract additional information, such as the number of parameters in convolutional and fully connected layers, by monitoring the matrix multiplications in the Eigen library3 on which TensorFlow is built. This attack provides more fine-grained information, but it does not generalize to computations on hardware other than a CPU. Also, we examine whether our attack can recover the inputs to the model and its parameters. By varying inputs and parameters while monitoring the functions used to compute these parameters using a code coverage tool, GCOV4, we find that the framework implements matrix multiplications of parameters in a data-independent way. Thus, we are unable to estimate the inputs and parameters of a victim model. We hypothesize that this is a general limit of cache-based side-channel attacks on DNNs that target instructions, and that obtaining the parameters is reducible to the problem of reading arbitrary victim memory.

3.4 EXTRACTING ARCHITECTURE ATTRIBUTES

We run our attack on Ubuntu 16.04 on a host machine equipped with an i7-4600M processor (8 cores and a 4MB L3 cache). The victim and attacker processes run at the user level on the same operating system (OS). Both processes use the TensorFlow v1.9.0-rc0 framework. The victim uses the DNN model to make predictions, and the attacker launches the Flush+Reload attack using the Mastik toolkit (Yarom, 2016) to monitor the target functions at the same time. A total of 13 convolutional neural network (CNN) architectures are considered in our experiment: DenseNet121, 169, 201 (Huang et al., 2017), VGG16, 19 (Simonyan & Zisserman, 2014), ResNet50, 101, 152 (He et al., 2016), InceptionV3, InceptionResNet (Szegedy et al., 2015), Xception (Chollet, 2017), MobileNetV1, and MobileNetV25 (Howard et al., 2017). Table 2 includes the extraction results from monitoring VGG16 and ResNet50. The full extraction results from the 13 networks are in Appendix D.
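As a rough illustration of the post-processing behind these results, the sketch below turns an ordered trace of observed target-function invocations (already recovered from cache hits) into per-query attribute counts, using the #queries marker from Table 1 to separate observations. The trace encoding is an assumption made for illustration only.

```python
from collections import Counter

# Attribute codes from Table 1; "#queries" (RunCallable) marks the start of a query.
ATTRIBUTES = ["#convs", "#fcs", "#softms", "#relus",
              "#mpools", "#apools", "#merges", "#biases"]

def split_queries(trace):
    """Split a flat trace of observed attribute codes into one list per query."""
    queries, current = [], []
    for event in trace:
        if event == "#queries":          # a new inference run begins
            if current:
                queries.append(current)
            current = []
        else:
            current.append(event)
    if current:
        queries.append(current)
    return queries

def count_attributes(query_trace):
    """Count how often each Table-1 attribute appeared within one query."""
    counts = Counter(query_trace)
    return {attr: counts.get(attr, 0) for attr in ATTRIBUTES}

def average_attributes(trace):
    """Average attribute counts over all observed queries (the 'Short' attack setting)."""
    queries = split_queries(trace)
    totals = Counter()
    for q in queries:
        totals.update(count_attributes(q))
    return {attr: totals[attr] / len(queries) for attr in ATTRIBUTES}

def segment_blocks(query_trace):
    """Looking ahead to Section 3.5: split one query's layer sequence into blocks
    at max-pooling events, so that convolutions per block can be counted."""
    blocks, current = [], []
    for event in query_trace:
        current.append(event)
        if event == "#mpools":
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks
```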
We first show the results from a Short attack, where an attacker can only run her process on a short interval of time, observing only a single query of the 3http://eigen.tuxfamily.org/index.php 4https://gcc.gnu.org/onlinedocs/gcc/Gcov.html 5Note that we use alpha = 1.0 for both. network. We randomly choose ten individual queries and average the attributes. We report errors as the sum of absolute deviations from ground truths. In VGG19, our attacker has 2.6 errors on average and 3.1 in ResNet50. We also show the extraction results from 10 continuous observations (L), in which the attacker runs her process for a more extended period of time. The error rates in both the networks are similar. These results demonstrate that DeepRecon achieves better accuracy by only observing a running network than prior work that assumes query access (Oh et al., 2018). 3.5 RECONSTRUCTING THE ARCHITECTURE OF BLACK-BOX DEEP NEURAL NETWORKS Based on the extracted architecture information, DeepRecon reconstructs the entire DNN architecture of the victim model. In these examples, we focus on the fact that most CNN architectures consist of the feature extractor and classifier layers. The feature extractor is located in the earlier layers and is the combination of basic building blocks. The classifier is a set of fully connected layers at the end of the network. In VGGs and ResNets, there are standard blocks used in each CNN architecture as we depict in Fig. 2. Each block includes activation layers with preceding convolutional layers. In the classifier layers, each fully connected layer is followed by an activation layer. We describe the reconstruction process of ResNet50 in Table 3. (Note that we also reconstructed VGG16 without errors and show the result in Appendix A.) In this table, we compare the computation sequences observed by the attacker with the actual computations in ResNet50. We can see the sequences are accurately captured with a few errors. The three steps at the bottom describe the reconstruction processes of our attacker. Our attacker first identifies (1) the number of blocks by counting the (max-)pooling layers. Once the attacker separates blocks with the pooling layer locations, she counts (2) the number of convolutional layers in each block. In ResNets, we know that the Residual block has four convolutional layers, and the Identity block has three convolutional layers in each. Thus, the attacker can identify the type of each block. After that, the attacker estimates (3) the number of fully connected layers at the end. Finally, with this block-level information, our attacker successfully estimates the victim architecture is the ResNet50 with high accuracy. Discussion about the Reconstruction Errors. We also examine whether the errors in our experiments have specific patterns, allowing our attacker to filter them out. However, we could not find any pattern: the types and locations of error attributes are different in each run (over 10 runs). Thus, Arch. Data Computations Sequences (Layers in the Ground Truth) ResNet50 G CR PM CR CR C C MR CR CR C MR CR CR C MR CR CR C C MR CR CR C MR CR CR C MR CR CR C MR CR CR C C MR CR CR C MR CR CR C MR CR CR C MR CR CR C MR CR CR C MR CR CR C C MR CR CR C MR CR CR C MR PA FSo S CR PM C CR CR C MR CR CR C MR CR CR C MR C CR CR C MR CR CR ... MR CR CR C MR CR CR C MR C CR CR C MR CR CR C MR CR CR C MR CR CR C MR CR CR C MR R CR C MR C CR CR C MR CR CR C MR CR CR C MR PA FSo Recon. Steps Details ResNet50 Recon. (1) Block 1. Block 2. Block 3 Block 4. Block 5. 
Block 6. Block 7. Block 8. Block 9. Block 10. Block 11. Block 12. Block 13. Block 14. Block 15. Block 16. Block 17. Block 18. (2) Input Residual Block Identity Block Identity Block Residual Block Identity Block Identity Block Identity Block Residual Block Identity Block Identity Block Identity Block Identity Block Identity Block Residual Block Identity Block Identity Block Fully Connecteds (3) ResNet 50: Configuration with 50-Layers (Note that C,P ,F ,M indicate the Convolutional, Pooling, Fully connected, Merge layers, and the subscripts mean the activations (R: ReLU and So: Softmax).) Table 3: Reconstruction Process of ResNet50 Architecture. We list the computation sequences captured by our attack in the above rows and the reconstruction process at the bottom rows. The errors in capturing the correct computation sequences by our attacker are marked as bold and red. we attribute these errors to two primary causes. First, there can be background noise from other processes that our Flush+Reload attack picks up, e.g., a process can pull data into the L3 cache and evict the target function between when the victim calls the function and we reload it. In this case, our attacker cannot observe the victim calling the function. Second, our attack can experience common errors associated with the Flush+Reload attack (Yarom & Falkner, 2014), e.g., a victim invokes the target function when we reload, causing our attacker to see a cache miss instead of correctly observing a cache hit. 4 FINGERPRINTING BLACK-BOX NEURAL NETWORKS Our attacker identifies victim neural network architectures using statistical models trained on the attributes as features and the architectures as labels. This is a powerful capability to have if our attacker aims to fingerprint the architecture of the pre-trained models used in transfer learning. Transfer learning is typically done in a fine-tuned manner: a user creates a student model that uses the architecture and parameters of a teacher model and trains only a few fully connected layers at the end of the network by freezing the backpropagation of the preceding layers. We also found that our attacker can learn the layers in the student model not updated during training when they observe the training process (Sec. 3.3). Thus, our attacker can extract both the network architecture and frozen layers of the student (victim) model. Once our attacker identifies the victim network’s teacher model with this information, the attacker can utilize several pre-trained models available from the Internet to perform further attacks. For instance, an attacker can increase the success rate of black-box attacks by crafting adversarial samples with the internal representation from the teacher models (Wang et al., 2018). Additionally, since adversarial samples transfer across different models for the same task (Tramèr et al., 2017), the attacker is not required to use the exact same teacher model—i.e., she can use any pre-trained model of an architecture family that achieves similar accuracy on a task. An attacker can also perform model extractions easily. This is because the model parameters from the pre-trained models can be readily found on the Internet as well and are used in a victim model in this setting. Finally, if the attacker has a partial knowledge of the victim’s training data and can gain the knowledge of which layers were frozen during training (see Sec. 
3.3), she can fully estimate the parameters of the entire network by independently training the last few layers that were not frozen. To evaluate our fingerprinting attack, we train decision tree classifiers on the 13 networks used in Sec. 3.4 to identify the network architectures using the extracted attributes and labels. We extract the attributes over 50 observations of each network (650 in total) and utilize 5-fold cross-validations. We Task Networks Acc. [Avg.] Important Attributes Total - 1.0 [0.9046] #relus [0.2575] #merges [0.2534] #convs [0.2497] #biases [0.1034] Family - 1.0 [0.9938] #relus [0.4621] #convs [0.4421] #mpools [0.3382] #apools [0.2752] Arch. Variants V 1.0 [0.9867] #relus [0.6982] #convs [0.6982] #biases [0.6898] - R 1.0 [0.9900] #relus [0.6399] #merges [0.6399] #convs [0.6399] #biases [0.3750] D 1.0 [0.9867] #relus [0.6399] #merges [0.6399] #convs [0.6100] - I 1.0 [1.0000] #convs [0.6399] #merges [0.6399] #apools [0.5875] #biases [0.3373] M 1.0 [1.0000] #relus [0.6982] #convs [0.6982] #fcs [0.6595] #softms [0.6228] (Note that V, R, D, I, and M indicate VGGs, ResNets, DenseNets, InceptionNets, and MobileNets.) Table 4: Fingerprinting Performance and Important Attributes. Each row corresponds to each task. We list the accuracy of the best classifiers and the essential attributes based on the MI scores, denoted by the numbers in brackets. measure the classification accuracy and analyze the four most essential attributes based on mutual information (MI) scores. Since the attributes are not affected by the host machines or operating systems, the attacker can train the models offline for use in attacks. Table 4 shows the results of fingerprinting the neural networks. We conduct three types of classification tasks with the aim of identifying 1) the entire 13 networks, 2) 5 network families, and 3) architecture variants in each network6. We report the accuracy of best decision trees and the average accuracy over the cross-validations. In all the tasks, our decision trees achieve 100% accuracy, which demonstrates, once trained, these statistical models can be perfect predictors. (Note that we also visualize our data in an attribute space via PCA analysis in Appendix C). We also identified the four essential attributes across all the classifications: 1) #relus, 2) #merges, 3) #convs, and 4) #apools. Identifying these influential attributes can guide a potential obfuscation-based defensive strategy against such side-channel attacks. 5 DEFENSES TO DEEPRECON ATTACK Previous studies on defenses against cache side-channel attacks (Kong et al., 2013; Zhou et al., 2016) require specific hardware (page-locked cache) or kernel-level features (page coloring). These solutions have not been widely deployed or have remained as optional features because of their impact on computational performance. Hence, we propose framework-level defenses that do not require specialized hardware or kernel-level updates. Our findings in Sec. 4 regarding the essential architecture attributes for an attack, e.g., #relus, #convs, and #merges, guide our search for defenses. As a result, we propose obfuscating the attacker’s observations of these attributes. We show that these defenses significantly reduce the success of our DeepRecon attack, and they can be extended to protect against a more general class cache side-channel attacks against DL frameworks. 
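To make the attribute analysis that guides these defenses concrete, the sketch below shows how a decision-tree fingerprinting classifier and a mutual-information ranking of the 8 attributes could be computed with scikit-learn, following the evaluation described in Section 4. The file names and array shapes are assumptions for illustration; the paper's actual pipeline is not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import mutual_info_classif

ATTRIBUTE_NAMES = ["#convs", "#fcs", "#softms", "#relus",
                   "#mpools", "#apools", "#merges", "#biases"]

# Illustrative inputs: one row of 8 extracted attributes per observation
# (50 observations x 13 networks = 650 rows), with the architecture as label.
X = np.load("attributes.npy")     # shape (650, 8), hypothetical file
y = np.load("labels.npy")         # shape (650,),  hypothetical file

# Decision-tree fingerprinting with 5-fold cross-validation.
clf = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("mean accuracy over folds:", scores.mean())

# Mutual-information scores rank which attributes matter most for fingerprinting;
# the highest-ranked ones are the natural targets for obfuscation.
mi = mutual_info_classif(X, y, random_state=0)
for name, score in sorted(zip(ATTRIBUTE_NAMES, mi), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```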
5.1 RUNNING DECOY PROCESSES WITH TINY MODELS

DeepRecon and other potential cache side-channel attacks on deep learning frameworks can only observe that a library function is called, but not by whom. By running an extra process (i.e., a decoy process) simultaneously with the actual process, we develop a simple but effective defensive strategy. The decoy process also invokes the target functions in the shared framework, which obfuscates the architecture attributes and computation sequences. To minimize the computational overhead, we utilize networks that only have a few layers, referred to as TinyNets. To evaluate our defense, we train these TinyNets at the same time as the victim's process is running ResNet50, and we measure the number of errors in the extracted attributes. The results are listed in Table 5. We experiment with three TinyNets: 1) only a Conv. layer (C:1), 2) a Conv. with a ReLU layer (C:1, R:1), and 3) two Conv. and ReLU layers with a Merge layer (C:2, R:2, M:1). We found that the existence of a decoy process significantly hinders the attacker's ability to extract correct attributes, causing 1283-2211 errors in the attribute extractions. Contrasted with up to 2.9 errors on average from DeepRecon's previous extractions (Section 3.4), we find that this defense is exceedingly effective at curbing this type of reconstruction attack. We also show that we can increase the errors associated with the attributes that we aim to obfuscate. For instance, when we run the TinyNet with only one convolutional layer, we observe that #convs is significantly increased. This is important because, with our defenses, a defender can choose the attributes to obfuscate. Since the defender can control what noise gets introduced, they can also dynamically and adaptively change what noise is added into the attacker's observations, thereby increasing our defense's effectiveness and generalizability. To quantify the overhead required by this defense, we measure the average network inference time with and without a decoy process, and we observe that the defense increases the inference time by only 5.91-8.38 seconds per inference. Thus, this defense is a reasonable measure to combat cache side-channel attacks that reconstruct DNNs.

Network | Arch. | #convs | #fcs | #softms | #relus | #mpools | #apools | #merges | #biases | Errors | Time
ResNet50 | - | 54.5 | 1 | 1 | 48.9 | 1.1 | 1 | 16 | 49.8 | 2.9 | 17.88
ResNet50 + TinyNets | C:1 | 368.75 | 1.05 | 1.05 | 47.05 | 0.95 | 1.00 | 687.75 | 347.80 | 1282.40 | 23.79
ResNet50 + TinyNets | C:1 R:1 | 360.00 | 1.15 | 1.15 | 394.00 | 1.00 | 1.00 | 675.95 | 350.80 | 1612.05 | 23.85
ResNet50 + TinyNets | C:2 R:2 M:1 | 414.55 | 1.00 | 1.00 | 715.10 | 1.10 | 1.00 | 782.25 | 468.25 | 2211.25 | 26.26
Table 5: Effectiveness of the Decoy Process. We compare the 8 attributes extracted from 10 runs, the average errors, and the average time with and without TinyNets. Note that C refers to the number of convolutional layers, R to the number of ReLU activation layers, and M to the number of merge layers.
(Footnote 6, referenced in Section 4: in task 3), we consider MobileNet and MobileNetV2 as the same family.)

5.2 OBLIVIOUS MODEL COMPUTATIONS

Another defense against DeepRecon is to obfuscate the order and number of the computations (i.e., function invocations) observed by our attacker using oblivious model computations. We propose two approaches. First, we can update the victim's architecture by adding extra layers. To minimize side-effects such as performance loss, these layers return an output with the same dimensions as the input; a minimal sketch of such a dimension-preserving layer is given below.
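A minimal sketch of such a dimension-preserving layer, here written with tf.keras; this is only an illustration of the idea, not the exact layer configuration used in the experiments.

```python
import tensorflow as tf

def dim_preserving_block(x, filters):
    """A 3x3 convolution with stride 1 and 'same' padding keeps the spatial size;
    choosing `filters` equal to the input depth also keeps the channel count."""
    return tf.keras.layers.Conv2D(filters, kernel_size=3, strides=1, padding="same")(x)

# Shape check on an illustrative 56x56x64 feature map.
inp = tf.keras.Input(shape=(56, 56, 64))
out = dim_preserving_block(inp, filters=64)
assert inp.shape[1:] == out.shape[1:]   # (56, 56, 64) in and out
```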
For instance, the convolutional layer with kernel size 3, padding size 1, and strides of length 1 can preserve the input dimensions, and the identity block of the ResNets preserves this as well. Thus, we augment the original architecture by adding such layers at the random location to make the same architecture look different in the attacker’s point of view. Prior work (Targ et al., 2016) has shown the unraveled view of the ResNet architecture. Under this view, the skip-connections of a network can be expressed as the ensemble of multiple computational paths that can be computed independently. Hence, we try splitting a computational path with skipconnections into multiple paths without skip-connections. In forward propagation, the multiple paths are randomly chosen and computed so that our attacker finds it difficult to capture the exact architecture. To evaluate our intuition, we construct the obfuscated architecture of ResNet50 (see Appendix B) and extract the attributes by using our attack. Our results are shown in Table 6. Using this defense, the errors detected by DeepRecon increased from 2-3 to 28 for ResNet50. During this test, the first 3 blocks over the entire 16 blocks of ResNet50 are obfuscated. While less effective than our previous defense, we conclude that this defense still can marginally obfuscate the observations of our attacker. Additionally, the gain on computational time is also small: it only increased from 17.88 to 24.03 seconds. 6 CONCLUSION This paper conducts the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels. We first define the realistic threat model for these attacks: our attacker does not require the ability to query the victim model; she runs a co-located process on the machine where the victims DL system is running and passively monitors the accesses of target functions in a shared framework. We also present DeepRecon, an attack that reconstructs the architecture of a victim network using the architecture attributes extracted via the Flush+Reload technique. Based on the extracted attributes, we further demonstrate that an attacker can build a meta-model that precisely fingerprints the architecture and family of the pre-trained model in a transfer learning setting. With the meta-model, we identified the essential attributes for these attacks. Finally, we propose and evaluate new framework-level defense techniques that obfuscate our attackers observations. Our empirical security analysis represents a step toward understanding how DNNs are vulnerable to side-channel attacks. D ATTRIBUTES EXTRACTION RESULTS FROM OTHER NETWORKS We show the attributes extraction results with the other 11 networks in Table 9.
1. What is the focus of the paper regarding deep learning models? 2. What are the strengths and weaknesses of the proposed approach in terms of its significance and practical applications? 3. How does the reviewer assess the novelty and impact of the paper compared to prior works in the same area? 4. Are there any concerns or limitations regarding the experimental results and their interpretation? 5. Can the attack described in the paper be used for realistic attacks, and what are the potential countermeasures?
Review
Review The paper describes a cache side-channel attack on a deep learning model. In a cache side-channel attack, the attacker sets up a process on the same machine where the victim process (that is running the training or evaluation job for the DNN model) is running. It is assumed that the victim process uses the same shared library for DNN computations as the attacking process. The paper shows that, based on the speed of accessing previously flushed functions, the attacker can discover the high-level network architecture, namely the types of layers and their sequence. The paper shows that, by spying on such cache access patterns in the Tensorflow library, this method can reliably extract the above high-level information for 11 different network architectures. It also describes a few countermeasure alternatives whereby the victim can obfuscate its cache access patterns for self-protection.

The significance of the results is not clear to me. The extracted information is very high level. What realistic attacks can be constructed from such coarse-grained fingerprinting? The experimental results show that the fingerprint can be used to map the architecture to one of the 13 well-known architectures (VGG16, ResNet, DenseNet, Inception, etc.). But so what? What does the victim lose by revealing that it's using one of a few very well known types of DNNs (the ones tested in this paper)? There may very well be a good reason why this is very dangerous, but that is not explained in the paper. Not being familiar with this line of research and its significance, I looked up several of the related papers (Suciu et al., 2018, Tramer et al., 2017, Papernot et al., 2017, Yan et al., 2018). None of them could explain why this particular type of fingerprinting is dangerous.

Of the cited previous work, Yan et al., 2018 seems to present the most closely related approach. The method described in that paper is very similar: a cache side-channel attack on a shared library through a co-located attacker process. They monitor at a finer grain -- Generalized Matrix Multiplications -- and are thus able to infer more details such as the size of the layers. This also makes the inference problem harder -- they were able to narrow down the search space of networks from >4x10^35 to 16 (on VGG16). On the surface, the results presented in this paper seem stronger. But they are actually solving a much easier problem -- their search space is one of 13 well-known networks. To me, Yan et al.'s approach is a much more powerful and promising setup.

Overall, while the paper is clearly written and presents the idea succinctly, it is derivative of previous research, and the results are not stronger. I'm not an expert in this area, so it's possible that I missed something. Based on my current understanding, however, I recommend reject.
ICLR
Title Learning To Explore Using Active Neural SLAM Abstract This work presents a modular and hierarchical approach to learn policies for exploring 3D environments, called ‘Active Neural SLAM’. Our approach leverages the strengths of both classical and learning-based methods, by using analytical path planners with learned SLAM module, and global and local policies. The use of learning provides flexibility with respect to input modalities (in the SLAM module), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies). Such use of learning within each module retains its benefits, while at the same time, hierarchical decomposition and modular training allow us to sidestep the high sample complexities associated with training end-to-end policies. Our experiments in visually and physically realistic simulated 3D environments demonstrate the effectiveness of our approach over past learning and geometry-based approaches. The proposed model can also be easily transferred to the PointGoal task and was the winning entry of CVPR 2019 Habitat PointGoal Navigation Challenge. 1 INTRODUCTION Navigation is a critical task in building intelligent agents. Navigation tasks can be expressed in many forms, for example, point goal tasks involve navigating to a specific coordinates and semantic navigation involves finding path to a specific scene or object. Irrespective of the task, a core problem for navigation in unknown environments is exploration, i.e., how to efficiently visit as much of the environment. This is useful for maximizing the coverage to give the best chance of finding the target in unknown environments or for efficiently pre-mapping environments on a limited time-budget. Recent work from Chen et al. (2019) has used end-to-end learning to tackle this problem. Their motivation is three fold: a) learning provides flexibility to the choice of input modalities (classical systems rely on observing geometry through use of specialized sensors, while learning systems can infer geometry directly from RGB images), b) use of learning can improve robustness to errors in explicit state estimation, and c) learning can effectively leverage structural regularities of the real world, leading to more efficient behavior in previously unseen environments. This lead to their design of an end-to-end trained neural network based policy that processed raw sensory observations to directly output actions that the agent should execute. While use of learning for exploration is well motivated, casting the exploration problem as an end-to-end learning problem has its own drawbacks. Learning about mapping, state-estimation and path-planning purely from data in an end-to-end manner can be prohibitively expensive. Consequently, past end-to-end learning work for exploration from Chen et al. (2019) relies on use of imitation learning and many millions of frames of experience, but still performs worse than classical methods that don’t require any training at all. This motivates our work. In this paper, we investigate alternate formulations of employing learning for exploration that retains the advantages that learning has to offer, but doesn’t suffer from the drawbacks of full-blown end-to-end learning. 
Our key conceptual insight is that the use of learning for leveraging structural regularities of indoor environments, robustness to state-estimation errors, and flexibility with respect to input modalities happens at different time scales and can thus be factored out. This motivates the use of learning in a modular and hierarchical fashion inside of what one may call a 'classical navigation pipeline'. This results in navigation policies that can work with raw sensory inputs such as RGB images, are robust to state estimation errors, and leverage the regularities of real-world layouts. This yields extremely competitive performance over both geometry-based methods and recent learning-based methods, while at the same time requiring a fraction of the number of samples. More specifically, our proposed exploration architecture comprises a learned Neural SLAM module, a global policy, and a local policy, which are interfaced via the map and an analytical path planner. The learned Neural SLAM module produces free-space maps and estimates the agent pose from input RGB images and motion sensors. The global policy consumes this free-space map with the agent pose and employs learning to exploit structural regularities in the layout of real-world environments to produce long-term goals. These long-term goals are used to generate short-term goals for the local policy (using a geometric path-planner). This local policy uses learning to directly map raw RGB images to actions that the agent should execute. The use of learning in the SLAM module provides flexibility with respect to the input modality, the learned global policy can exploit regularities in the layout of real-world environments, and the learned local policy can use visual feedback to exhibit more robust behaviour. At the same time, the hierarchical and modular design and the use of analytical planning significantly cut down the search space during training, leading to better performance as well as sample efficiency. We demonstrate our proposed approach in visually and physically realistic simulators for the task of geometric exploration (visit as much area as possible). We work with the Habitat simulator from Savva et al. (2019). While Habitat is already visually realistic (it uses real-world scans from Chang et al. (2017) and Xia et al. (2018) as environments), we improve its physical realism by using actuation and odometry sensor noise models that we collected by conducting physical experiments on a real mobile robot. Our experiments and ablations in this realistic simulation reveal the effectiveness of our proposed approach for the task of exploration. A straightforward modification of our method also tackles point-goal navigation tasks, and it won the AI Habitat challenge at CVPR 2019 across all tracks.

2 RELATED WORK

Navigation has been well studied in classical robotics. There has been a renewed interest in the use of learning to arrive at navigation policies for a variety of tasks. Our work builds upon concepts in classical robotics and learning for navigation. We survey related works below.

Navigation Approaches. Classical approaches to navigation break the problem into two parts: mapping and path planning. Mapping is done via simultaneous localization and mapping (Thrun et al., 2005; Hartley and Zisserman, 2003; Fuentes-Pacheco et al., 2015), by fusing information from multiple views of the environment.
While sparse reconstruction can be done well with monocular RGB images (Mur-Artal and Tardós, 2017), dense mapping is inefficient (Newcombe et al., 2011) or requires specialized scanners such as Kinect (Izadi et al., 2011). Maps are used to compute paths to goal locations via path planning (Kavraki et al., 1996; Lavalle and Kuffner Jr, 2000; Canny, 1988). These classical methods have inspired recent learning-based techniques. Researchers have designed neural network policies that reason via spatial representations (Gupta et al., 2017; Parisotto and Salakhutdinov, 2018; Zhang et al., 2017; Henriques and Vedaldi, 2018; Gordon et al., 2018), topological representations (Savinov et al., 2018a;b), or use differentiable and trainable planners (Tamar et al., 2016; Lee et al., 2018; Gupta et al., 2017; Khan et al., 2017). Our work furthers this research, and we study a hierarchical and modular decomposition of the problem, and employ learning inside these components instead of end-to-end learning. Research also focuses on incorporating semantics in SLAM (Pronobis and Jensfelt, 2012; Walter et al., 2013). Exploration in Navigation. While a number of works focus on passive map-building, path planning and goal-driven policy learning, a much smaller body of work tackles the problem of active SLAM, i.e., how to actively control the camera for map building. We point readers to Fuentes-Pacheco et al. (2015) for a detailed survey, and summarize major themes below. Most such works frame this problem as a Partially Observable Markov Decision Process (POMDP) that are approximately solved (Martinez-Cantin et al., 2009; Kollar and Roy, 2008), and or seek to find a sequence of actions that minimizes uncertainty of maps (Stachniss et al., 2005; Carlone et al., 2014). Another line of work, explores by picking vantage points (such as on the frontier between explored and unexplored regions (Dornhege and Kleiner, 2013; Holz et al., 2010; Yamauchi, 1997; Xu et al., 2017)). Recent works from Chen et al. (2019); Savinov et al. (2018b); Fang et al. (2019) attack this problem via learning. Our proposed modular policies unify the last two lines of research, and we show improvements over representative methods from both these lines of work. Exploration has also been studied more generally in RL in the context of exploration-exploitation trade-off (Sutton and Barto, 2018; Kearns and Singh, 2002; Auer, 2002; Jaksch et al., 2010). Hierarchical and Modular Policies. Hierarchical RL (Dayan and Hinton, 1993; Sutton et al., 1999; Barto and Mahadevan, 2003) is an active area of research, aimed at automatically discovering hierarchies to speed up learning. However, this has proven to be challenging, and thus most work has resorted to using hand-defining hierarchies. For example in context of navigation, Bansal et al. (2019) and Kaufmann et al. (2019) design modular policies for navigation, that interface learned policies with low-level feedback controllers. Hierarchical and modular policies have also been used for Embodied Question Answering (Das et al., 2018a; Gordon et al., 2018; Das et al., 2018b). 3 TASK SETUP We follow the exploration task setup proposed by Chen et al. (2019) where the objective is to maximize the coverage in a fixed time budget. The coverage is defined as the total area in the map known to be traversable. Our objective is train a policy which takes in an observation st at each time step t and outputs a navigational action at to maximize the coverage. 
We try to make our experimental setup in simulation as realistic as possible with the goal of transferring trained policies to the real world. We use the Habitat simulator (Savva et al., 2019) with the Gibson (Xia et al., 2018) and Matterport (MP3D) (Chang et al., 2017) datasets for our experiments. Both the Gibson and Matterport datasets are based on real-world scene reconstructions and are thus significantly more realistic than the synthetic SUNCG dataset (Song et al., 2017) used in past research on exploration (Chen et al., 2019; Fang et al., 2019). In addition to synthetic scenes, prior works on learning-based navigation have also assumed simplistic agent motion. Some works limit agent motion to a grid with 90 degree rotations (Zhu et al., 2017; Gupta et al., 2017; Parisotto and Salakhutdinov, 2018; Chaplot et al., 2018). Other works which implement fine-grained control typically assume unrealistic agent motion with no noise (Savva et al., 2019) or perfect knowledge of agent pose (Chaplot et al., 2016). Since the motion is simplistic, it becomes trivial to estimate the agent pose in most cases even if it is not assumed to be known. The reason behind these assumptions on agent motion and pose is that motion and sensor noise models are not known. In order to relax both these assumptions, we collect motion and sensor data in the real world and implement more realistic agent motion and sensor noise models in the simulator, as described in the following subsection.

3.1 ACTUATION AND SENSOR NOISE MODEL

We represent the agent pose by (x, y, o), where x and y are the xy coordinates of the agent measured in metres and o is the orientation of the agent in radians (measured counter-clockwise from the x-axis). Without loss of generality, assume the agent starts at p0 = (0, 0, 0). Now, suppose the agent takes an action at. Each action is implemented as a control command on a robot. Let the corresponding control command be ∆ua = (xa, ya, oa). Let the agent pose after the action be p1 = (x∗, y∗, o∗). The actuation noise (ϵact) is the difference between the actual agent pose (p1) after the action and the intended agent pose (p0 + ∆ua):

ϵact = p1 − (p0 + ∆ua) = (x∗ − xa, y∗ − ya, o∗ − oa)

Mobile robots typically have sensors which estimate the robot pose as it moves. Let the sensor estimate of the agent pose after the action be p′1 = (x′, y′, o′). The sensor noise (ϵsen) is given by the difference between the sensor pose estimate (p′1) and the actual agent pose (p1):

ϵsen = p′1 − p1 = (x′ − x∗, y′ − y∗, o′ − o∗)

In order to implement the actuation and sensor noise models, we would like to collect data for navigational actions in the Habitat simulator. We use three default navigational actions: Forward: move forward by 25cm, Turn Right: on-the-spot rotation clockwise by 10 degrees, and Turn Left: on-the-spot rotation counter-clockwise by 10 degrees. The control commands are implemented as uForward = (0.25, 0, 0), uRight = (0, 0, −10π/180) and uLeft = (0, 0, 10π/180). In practice, a robot can also rotate slightly while moving forward and translate a bit while rotating on the spot, creating rotational actuation noise in the forward action and, similarly, translational actuation noise in the on-the-spot rotation actions. We use a LoCoBot (http://locobot.org) to collect data for building the actuation and sensor noise models. We use the PyRobot API (Murali et al., 2019) along with ROS (Quigley et al., 2009) to implement the control commands and get sensor readings.
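A minimal sketch of the per-action noise-model fitting described next, using scikit-learn. Variable names are illustrative, and selecting the number of mixture components by held-out log-likelihood is a simple stand-in for the cross-validation used in practice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

def fit_noise_model(residuals, max_components=5, seed=0):
    """Fit a Gaussian mixture to (dx, dy, do) residuals for one action and one
    noise type (actuation or sensor). `residuals` has shape (N, 3)."""
    train, val = train_test_split(residuals, test_size=0.2, random_state=seed)
    best_model, best_ll = None, -np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=seed).fit(train)
        ll = gmm.score(val)     # mean per-sample log-likelihood on held-out data
        if ll > best_ll:
            best_model, best_ll = gmm, ll
    return best_model

# At simulation time, a noise sample for one action is drawn with model.sample(1).
```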
For each action a, we fit a separate Gaussian Mixture Model for the actuation noise and the sensor noise, making a total of 6 models. Each component in these Gaussian mixture models is a multi-variate Gaussian in 3 variables: x, y and o. For each model, we collect 600 datapoints. The number of components in each Gaussian mixture model is chosen using cross-validation. We implement these actuation and sensor noise models in the Habitat simulator for our experiments. We have released the noise models, along with their implementation in the Habitat simulator, in the open-source code.

4 METHODS

We propose a modular navigation model, 'Active Neural SLAM'. It consists of three components: a Neural SLAM module, a Global policy and a Local policy, as shown in Figure 1. The Neural SLAM module predicts the map of the environment and the agent pose based on the current observations and previous predictions. The Global policy uses the predicted map and agent pose to produce a long-term goal. The long-term goal is converted into a short-term goal using path planning. The Local policy takes navigational actions based on the current observation to reach the short-term goal.

Map Representation. The Active Neural SLAM model internally maintains a spatial map mt and the pose of the agent xt. The spatial map mt is a 2×M×M matrix, where M×M denotes the map size and each element in this spatial map corresponds to a cell of size 25cm2 (5cm × 5cm) in the physical world. Each element in the first channel denotes the probability of an obstacle at the corresponding location, and each element in the second channel denotes the probability of that location being explored. A cell is considered to be explored when it is known to be free space or an obstacle. The spatial map is initialized with all zeros at the beginning of an episode, m0 = [0]2×M×M. The pose xt ∈ R3 denotes the x and y coordinates of the agent and the orientation of the agent at time t. The agent always starts at the center of the map facing east at the beginning of the episode, x0 = (M/2, M/2, 0.0).

Neural SLAM Module. The Neural SLAM module (fMap) takes in the current RGB observation st, the current and last sensor readings of the agent pose x′t−1:t, and the last agent pose and map estimates, x̂t−1 and mt−1, and outputs an updated map mt and the current agent pose estimate x̂t (see Figure 2): mt, x̂t = fMap(st, x′t−1:t, x̂t−1, mt−1 | θM), where θM denotes the trainable parameters of the Neural SLAM module. It consists of two learned components, a Mapper and a Pose Estimator. The Mapper outputs an egocentric top-down 2D spatial map, pego_t ∈ [0, 1]2×V×V (where V is the vision range), predicting the obstacles and the explored area in the current observation. The Pose Estimator predicts the agent pose (x̂t) based on the past pose estimate (x̂t−1) and the last two egocentric map predictions (pego_t−1:t). It essentially compares the current egocentric map prediction to the last egocentric map prediction transformed to the current frame to predict the pose change between the two maps. The egocentric map from the Mapper is transformed to a geocentric map based on the pose estimate given by the Pose Estimator and then aggregated with the previous spatial map (mt−1) to get the current map (mt). More implementation details of the Neural SLAM module are provided in the Appendix.

Global Policy.
The Global Policy takes ht ∈ [0, 1]4×M×M as input, where the first two channels of ht are the spatial map mt given by the SLAM module, the third channel represents the current agent position estimated by the SLAM module, the fourth channel represents the visited locations, i.e. ∀i, j ∈ {1, 2, . . . ,m}: ht[c, i, j] = mt[c, i, j] ∀c ∈ {0, 1} ht[2, i, j] = 1 if i = x̂t[0] and j = x̂t[1] ht[3, i, j] = 1 if (i, j) ∈ [(x̂k[0], x̂k[1])]k∈{0,1,...,t} We perform two transformations before passing ht to the Global Policy model. The first transformation subsamples a window of size 4×G×G around the agent from ht. The second transformation performs max pooling operations to get an output of size 4×G×G from ht. Both the transformations are stacked to form a tensor of size 8×G×G and passed as input to the Global Policy model. The Global Policy uses a 5-layer convolutional neural network to predict a long-term goal, glt in G×G space: glt = πG(ht|θG), where θG are the parameters of the Global Policy. Planner. The Planner takes the long-term goal (glt), the spatial obstacle map (mt) and the agnet pose estimate (x̂t) as input and computes the short-term goal gst , i.e. g s t = fPlan(g l t,mt, x̂t). It computes the shortest path from the current agent location to the long-term goal (glt) using the Fast Marching Method (Sethian, 1996) based on the current spatial map mt. The unexplored area is considered as free space for planning. We compute a short-term goal coordinate (farthest point within ds(= 1.25m) from the agent) on the planned path. Local Policy. The Local Policy takes as input the current RGB observation (st) and the short-term goal (gst ) and outputs a navigational action, at = πL(st, g s t |θL), where θL are the parameters of the Local Policy. The short-term goal coordinate is transformed into relative distance and angle from the agent’s location before being passed to the Local Policy. It consists of a 3-layer convolutional neural network followed by a GRU layer. 5 EXPERIMENTAL SETUP We use the Habitat simulator (Savva et al., 2019) with the Gibson (Xia et al., 2018) and Matterport (MP3D) (Chang et al., 2017) datasets for our experiments. Both Gibson and MP3D consist of scenes which are 3D reconstructions of real-world environments, however Gibson is collected using a different set of cameras, consists mostly of office spaces while MP3D consists of mostly homes with a larger average scene area. We will use Gibson as our training domain, and use MP3D for domain generalization experiments. The observation space consists of RGB images of size 3× 128× 128 and base odometry sensor readings of size 3 × 1 denoting the change in agent’s x-y coordinates and orientation. The actions space consists of three actions: move_forward, turn_left, turn_right. Both the base odometry sensor readings and the agent motion based on the actions are noisy. They are implemented using the sensor and actuation noise models based on real-world data as discussed in Section 3.1. We follow the Exploration task setup proposed by Chen et al. (2019) where the objective to maximize the coverage in a fixed time budget. Coverage is the total area in the map known to be traversable. We define a traversable point to be known if it is in the field-of-view of the agent and is less than 3m away. We use two evaluation metrics, the absolute coverage area in m2 (Cov) and percentage of area explored in the scene (% Cov), i.e. ratio of coverage to maximum possible coverage in the corresponding scene. 
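A minimal sketch of how these two coverage metrics could be computed from the maintained map; the cell size follows the map representation in Section 4, and variable names are illustrative.

```python
import numpy as np

CELL_AREA_M2 = 0.05 * 0.05   # each map cell covers 5cm x 5cm

def coverage_metrics(explored_traversable, max_explorable_area_m2):
    """Cov (m^2) and %Cov from a boolean H x W mask of cells that are both
    explored and known to be traversable; the maximum explorable area of the
    scene (in m^2) comes from the ground-truth map."""
    cov = float(explored_traversable.sum()) * CELL_AREA_M2
    return cov, cov / max_explorable_area_m2

# Illustrative usage on a 480 x 480 map (24m x 24m):
mask = np.zeros((480, 480), dtype=bool)
mask[200:300, 200:300] = True                       # pretend 100 x 100 cells were covered
cov, pct = coverage_metrics(mask, max_explorable_area_m2=60.0)
print(f"Cov = {cov:.1f} m^2, %Cov = {pct:.2f}")     # Cov = 25.0 m^2, %Cov = 0.42
```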
During training, each episode lasts for a fixed length of 1000 steps. We use train/val/test splits provided by Savva et al. (2019) for both the datasets. Note that the set of scenes used in each split is disjoint, which means the agent is tested on new scenes never seen during training. Gibson test set is not public but rather held out on an online evaluation server for the Pointgoal task. We use the validation as the test set for comparison and analysis for the Gibson domain. We do not use the validation set for hyper-parameter tuning. To analyze the performance of all the models with respect to the size of the scene, we split the Gibson validation set into two parts, a small set of 10 scenes with explorable area ranging from 16m2 to 36m2, and a large set of 4 scenes with traversable area ranging from 55m2 to 100m2. Note that the size of the map is usually much larger than the traversable area, with the largest map being about 23m long and 11m wide. Training Details. We train our model in the Gibson domain and transfer it to the Matterport domain. The Mapper is trained to predict egocentric projections, and the Pose Estimator is trained to predict agent pose using supervised learning. The ground truth egocentric projection is computed using geometric projections from ground truth depth. The Global and Local policies are both trained using Reinforcement Learning. The reward for the Global policy is the increase in coverage and the reward for the Local policy is the reduction in Euclidean distance to the short-term goal. All the modules are trained simultaneously. Their parameters are independent, but the data distribution is inter-dependent. Based on the actions taken by the Local policy, the future input to Neural SLAM module changes, which in turn changes the map and agent pose input to the Global policy and consequently affects the short-term goal given to the Local Policy. For more architecture and hyperparameter details, please refer to the supplementary material and the open-source code. Baselines. We use a range of end-to-end Reinforcement Learning (RL) methods as baselines: RL + 3LConv: An RL Policy with 3 layer convolutional network followed by a GRU (Cho et al., 2014) as described by Savva et al. (2019) which is also identical to our Local Policy architecture. RL + Res18: A RL Policy initialized with ResNet18 (He et al., 2016) pre-trained on ImageNet followed by a GRU. RL + Res18 + AuxDepth: This baseline is adapted from Mirowski et al. (2017) who use depth prediction as an auxiliary task. We use the same architecture as our Neural SLAM module (conv layers from ResNet18) with one additional deconvolutional layer for Depth prediction followed by 3 layer convolution and GRU for the policy. RL + Res18 + ProjDepth: This baseline is adapted form Chen et al. (2019) who project the depth image in an egocentric top-down in addition to the RGB image as input to the RL policy. Since we do not have depth as input, we use the architecture from RL + Res18 + AuxDepth for depth prediction and project the predicted depth before passing to 3Layer Conv and GRU policy. For all the baselines, we also feed a 32-dimensional embedding of the sensor pose reading to the GRU along with the image-based representation. This embedding is also learnt end-to-end using RL. All baselines are trained using PPO (Schulman et al., 2017) with increase in coverage as the reward. 6 RESULTS We train the proposed ANS model and all the baselines for the Exploration task with 10 million frames on the Gibson training set. 
The results are shown in Table 1. The results on the Gibson Val set are averaged over a total of 994 episodes in 14 different unseen scenes. The proposed model achieves an average absolute and relative coverage of 31.379m2/0.924 as compared to 24.958m2/0.766 for the best baseline. This indicates that the proposed model is more efficient and effective at exhaustive exploration as compared to the baselines. This is because our hierarchical policy architecture reduces the horizon of the long-term exploration problem as instead of taking tens of low-level navigational actions, the Global policy only takes few long-term goal actions. We also report the domain generalization performance on the Exploration task in Table 1 (see shaded region), where all models trained on Gibson are evaluated on the Matterport domain. ANS leads to higher domain generalization performance (57.228m2/0.405 vs 41.549m2/0.297). The abosulte coverage is higher for the Matterport domain as it consists of larger scenes on average. Some visualizations of policy execution are provided in Figure 42. In Fig. 3, we plot the relative coverage (% Cov) of all the models as the episode progresses on the large and small scene sets, as well as the overall Gibson Val set. The plot on the small scene set shows that ANS is able to almost completely explore the small scenes in around 500 steps, however the baselines are only able to explore 85% of the small scenes in 1000 steps (see Fig. 3 center). This indicates that ANS explores more efficiently in small scenes. The plot on the large scenes set shows that the performance gap between ANS and baselines widens as the episode progresses (see Fig. 3 left). Looking at the behaviour of the baselines, we saw that they often got stuck in local areas. This behaviour indicates that they are unable to remember explored areas over long-time horizons and are ineffective at long-term planning. On the other hand, ANS uses a Global policy on the map which allows it to have memory of explored areas over long-time horizons, and plan effectively to reach distant long-term goals by leveraging analytical planners. As a result, it is able to explore effectively in large scenes with long episode lengths. 6.1 ABLATIONS Local Policy. An alternative to learning a Local Policy is to have a deterministic policy which follows the plan given by the Planner. As shown in Table 2, the ANS model performs much worse without the Local Policy. The Local Policy is designed to adapt to small errors in Mapping. We observed Local policy overcoming both false positives and false negatives encountered in mapping. For example, the Neural SLAM module could sometime wrongly predict a carpet as an obstacle. In this case, the planner would plan to go around the carpet. However, if the short-term goal is beyond the carpet, the Local policy can understand that the carpet is not an obstacle based on the RGB observation and learn to walk over it. Similarly, we also observed cases where the Neural SLAM module didn’t predict small obstacles very close to the agent as they were not in the field-of-view due to the height of the camera. In this case, the planner would plan a path through the obstacle where the deterministic policy would get stuck. Since the local policy is recurrent, it learns to navigate around these obstacles by getting feedback from the environment. When the policy tries to move forward but it can not, it gets feedback that there must be an obstacle. 
2 See https://devendrachaplot.github.io/projects/Neural-SLAM for visualization videos.

Global Policy. An alternative to learning a Global Policy for sampling long-term goals is to use a classical algorithm called Frontier-based exploration (Yamauchi, 1997). A frontier is defined as the boundary between the explored free space and the unexplored space. Frontier-based exploration essentially samples points on this frontier as goals to explore the space. There are different variants of Frontier-based exploration based on the sampling strategy. Holz et al. (2010) compare different sampling strategies and find that sampling the point on the frontier closest to the agent gives the best results empirically. We implement this variant and use it in place of our learned Global Policy. As shown in Table 2, the Frontier-based exploration policy performs worse than the Global Policy. We observed that Frontier-based exploration spent a lot of time exploring corners or small areas behind furniture. In contrast, the trained Global policy ignored small spaces and chose distant long-term goals, which led to exploring more area.

Pose Estimation. A difference between ANS and the baselines is that ANS uses additional supervision to train the Pose Estimator. In order to understand whether the performance gain is coming from this additional supervision, we remove the Pose Estimator from ANS and just use the input sensor reading as our pose estimate. Results in Table 2 show that ANS still outperforms the baselines even without the Pose Estimator. Furthermore, passing the ground truth pose as input to the baselines instead of the sensor reading did not improve the performance of the baselines.

6.2 REAL-WORLD TRANSFER

We deploy the trained ANS policy on a Locobot in the real world. In order to match the real-world observations to the simulator observations as closely as possible, we change the simulator input configuration to match the camera intrinsics on the Locobot. This includes the camera height and horizontal and vertical fields-of-view. In Figure 5, we show an episode of ANS exploring the living area in an apartment. The figure shows that the policy transfers well to the real world and is able to effectively explore the environment. The long-term goals sampled by the Global policy (shown by blue circles on the map) are often towards open spaces in the explored map, which indicates that it is learning to exploit the structure in the map. Please refer to the project webpage for real-world transfer videos.

6.3 POINTGOAL TASK TRANSFER

PointGoal has been the most studied task in recent literature on navigation, where the objective is to navigate to a goal location whose relative coordinates are given as input in a limited time budget. In this task, each episode ends when either the agent takes the stop action or at a maximum of 500 timesteps. An episode is considered a success when the final position of the agent is within 0.2m of the goal location. In addition to Success rate (Succ), Success weighted by (normalized inverse) Path Length or SPL is also used as a metric for evaluation, as proposed by Anderson et al. (2018). All the baseline models trained for the task of Exploration either need to be retrained or at least fine-tuned to be transferred to the Pointgoal task. The modularity of ANS provides another advantage: it can be transferred to the Pointgoal task without any additional training.
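A minimal sketch of the closest-frontier heuristic used for the Frontier-based exploration baseline in the ablation above is given below; the implementation details (4-connectivity, Euclidean distance in map cells) are our own simplifying assumptions rather than the exact variant of Holz et al. (2010).

```python
import numpy as np

def closest_frontier_goal(explored, obstacle, agent_cell):
    """Pick the frontier cell closest to the agent as the next exploration goal.

    explored, obstacle: boolean M x M maps; agent_cell: (row, col) of the agent.
    A frontier cell is explored free space with at least one unexplored 4-neighbour.
    Returns None if no frontier cell remains.
    """
    free = explored & ~obstacle
    unexplored = ~explored
    # A free cell is a frontier cell if any of its 4 neighbours is unexplored.
    neighbour_unexplored = np.zeros_like(unexplored)
    neighbour_unexplored[1:, :] |= unexplored[:-1, :]
    neighbour_unexplored[:-1, :] |= unexplored[1:, :]
    neighbour_unexplored[:, 1:] |= unexplored[:, :-1]
    neighbour_unexplored[:, :-1] |= unexplored[:, 1:]
    frontier = free & neighbour_unexplored
    cells = np.argwhere(frontier)
    if len(cells) == 0:
        return None
    dists = np.linalg.norm(cells - np.asarray(agent_cell), axis=1)
    return tuple(cells[np.argmin(dists)])
```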
For transfer to the Pointgoal task, we just fix the Global policy to always output the PointGoal coordinates as the long-term goal, and use the Local policy and Mapper trained for the Exploration task. We found that an ANS policy trained on Exploration, when transferred to the Pointgoal task, performed better than several RL and Imitation Learning baselines trained on the Pointgoal task. The transferred ANS model achieves a success rate/SPL of 0.950/0.846 as compared to 0.827/0.730 for the best baseline model on the Gibson val set. The ANS model also generalized significantly better than the baselines to harder goals and to the Matterport domain. In addition to better performance, ANS was also 10 to 75 times more sample efficient than the baselines. This transferred ANS policy was also the winner of the CVPR 2019 Habitat Pointgoal Navigation Challenge for both RGB and RGB-D tracks among over 150 submissions from 16 teams. These results highlight a key advantage of our model: it allows us to transfer the knowledge of obstacle avoidance and control in low-level navigation across tasks, as the Local Policy and Mapper are task-invariant. More details about the Pointgoal experiments, baselines, and results, including domain and goal generalization on the Pointgoal task, are provided in the supplementary material.

7 CONCLUSION

In this paper, we proposed a modular navigational model which leverages the strengths of classical and learning-based navigational methods. We show that the proposed model outperforms prior methods on both Exploration and PointGoal tasks and shows strong generalization across domains, goals, and tasks. In the future, the proposed model can be extended to complex semantic tasks such as Semantic Goal Navigation and Embodied Question Answering by using a semantic Neural SLAM module which creates a multi-channel map capturing semantic properties of the objects in the environment. The model can also be combined with prior work on localization to relocalize in a previously created map for efficient navigation in subsequent episodes.

ACKNOWLEDGEMENTS

This work was supported by IARPA DIVA D17PC00340, ONR Grant N000141812861, ONR MURI, ONR Young Investigator, DARPA MCS and Apple. We would also like to acknowledge NVIDIA's GPU support.

Licenses for referenced datasets. Gibson: http://svl.stanford.edu/gibson2/assets/GDS_agreement.pdf Matterport3D: http://kaldir.vc.in.tum.de/matterport/MP_TOS.pdf

A POINTGOAL EXPERIMENTS

PointGoal has been the most studied task in recent literature on navigation, where the objective is to navigate to a goal location whose relative coordinates are given as input in a limited time budget. We follow the PointGoal task setup from Savva et al. (2019), using the train/val/test splits for both the Gibson and Matterport datasets. Note that the set of scenes used in each split is disjoint, which means the agent is tested on new scenes never seen during training. The Gibson test set is not public but rather held out on an online evaluation server3. We report the performance of our model on the Gibson test set when submitted to the online server, but also use the validation set as another test set for extensive comparison and analysis. We do not use the validation set for hyper-parameter tuning. Savva et al. (2019) identify two measures to quantify the difficulty of a PointGoal dataset.
The first is the average geodesic distance (distance along the shortest path) to the goal location from the starting location of the agent, and the second is the average geodesic to Euclidean distance ratio (GED ratio). The GED ratio is always greater than or equal to 1, with a higher ratio resulting in harder episodes. The train/val/test splits in the Gibson dataset come from the same distribution, having similar average geodesic distance and GED ratio. In order to analyze the performance of the proposed model on an out-of-set goal distribution, we create two harder sets, Hard-Dist and Hard-GEDR. In the Hard-Dist set, the geodesic distance to the goal is always more than 10m and the average geodesic distance to the goal is 13.48m as compared to 6.9/6.5/7.0m in the train/val/test splits (Savva et al., 2019). The Hard-GEDR set consists of episodes with an average GED ratio of 2.52 and a minimum GED ratio of 2.0, as compared to an average GED ratio of 1.37 in the Gibson val set. We also follow the episode specification from Savva et al. (2019). Each episode ends when either the agent takes the stop action or at a maximum of 500 timesteps. An episode is considered a success when the final position of the agent is within 0.2m of the goal location. In addition to Success rate (Succ), we also use Success weighted by (normalized inverse) Path Length or SPL as a metric for evaluation for the PointGoal task, as proposed by Anderson et al. (2018).

A.1 POINTGOAL RESULTS

In Table 3, we show the performance of the proposed model transferred to the PointGoal task along with the baselines trained on the PointGoal task with the same amount of data (10 million frames). The proposed model achieves a success rate/SPL of 0.950/0.846 as compared to 0.827/0.730 for the best baseline model on the Gibson val set. We also report the performance of the proposed model trained from scratch on the PointGoal task for 10 million frames. The results indicate that the performance of ANS transferred from Exploration is comparable to ANS trained on PointGoal. This highlights a key advantage of our model: it allows us to transfer the knowledge of obstacle avoidance and control in low-level navigation across tasks, as the Local Policy and Mapper are task-invariant.

3 https://evalai.cloudcv.org/web/challenges/challenge-page/254

Sample efficiency. RL models are typically trained for more than 10 million samples. In order to compare the performance and sample-efficiency, we trained the best performing RL model (RL + Res18 + GRU + ProjDepth) for 75 million frames and it achieved a Succ/SPL of 0.678/0.486. ANS reaches a Succ/SPL of 0.789/0.703 at only 1 million frames. These numbers indicate that ANS achieves a > 75× speedup as compared to the best RL baseline.

Domain and Goal Generalization: In Table 3 (see shaded region), we evaluate all the baselines and ANS trained on the PointGoal task in the Gibson domain on the test set in the Matterport domain as well as on the harder goal sets in Gibson. We also transfer ANS trained on Exploration in Gibson to all 3 sets. The results show that ANS outperforms all the baselines on all generalization sets. Interestingly, RL-based methods almost fail completely on the Hard-Dist set. We also analyze the performance of the proposed model as compared to the two best baselines, CMP and IL + Res18 + GRU, as a function of geodesic distance to goal and GED ratio in Figure 7. The performance of the baselines drops faster as compared to ANS, especially with increase in goal distance.
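The SPL metric (Anderson et al., 2018) and the GED ratio used to construct the harder sets can be computed as in the sketch below; the function names are ours.

```python
def spl(successes, shortest_path_lengths, agent_path_lengths):
    """Success weighted by (normalized inverse) Path Length (Anderson et al., 2018).

    successes: per-episode 0/1 success indicators S_i.
    shortest_path_lengths: geodesic distances l_i from start to goal.
    agent_path_lengths: lengths p_i of the paths actually taken by the agent.
    """
    total = 0.0
    for s, l, p in zip(successes, shortest_path_lengths, agent_path_lengths):
        total += s * l / max(p, l)
    return total / len(successes)

def ged_ratio(geodesic_dist, euclidean_dist):
    """Geodesic-to-Euclidean distance ratio; episodes with a high ratio form the Hard-GEDR set."""
    return geodesic_dist / euclidean_dist
```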
This indicates that end-to-end learning methods are effective at short-term navigation but struggle when long-term planning is required to reach a distant goal. In Figure 8, we show some example trajectories of the ANS model along with the predicted map. The successful trajectories indicate that the model exhibits strong backtracking behavior which makes it effective at distant goals requiring long-term planning. Figure 9 visualizes a trajectory in the PointGoal task, showing first-person observations and the corresponding map predictions. Please refer to the project webpage for visualization videos.

Habitat Challenge Results. We submitted the ANS model to the CVPR 2019 Habitat Pointgoal Navigation Challenge. The results are shown in Figure 6. ANS was submitted under the code-name 'Arnold'. ANS was the winning entry for both RGB and RGB-D tracks among over 150 submissions from 16 teams, achieving an SPL of 0.805 (RGB) and 0.948 (RGB-D) on the Test Challenge set.

B NOISE MODEL IMPLEMENTATION DETAILS

In order to implement the actuation and sensor noise models, we would like to collect data for navigational actions in the Habitat simulator. We use three default navigational actions: Forward: move forward by 25cm, Turn Right: on-the-spot rotation clockwise by 10 degrees, and Turn Left: on-the-spot rotation counter-clockwise by 10 degrees. The control commands are implemented as uForward = (0.25, 0, 0), uRight = (0, 0, −10 ∗ π/180) and uLeft = (0, 0, 10 ∗ π/180). In practice, a robot can also rotate slightly while moving forward and translate a bit while rotating on the spot, creating rotational actuation noise in the forward action and, similarly, translational actuation noise in the on-the-spot rotation actions. We use a Locobot4 to collect data for building the actuation and sensor noise models. We use the pyrobot API (Murali et al., 2019) along with ROS (Quigley et al., 2009) to implement the control commands and get sensor readings. In order to get an accurate agent pose, we use a Hokuyo UST-10LX Scanning Laser Rangefinder (LiDAR), which is very precise in our scenario as we take static readings in 2D (Kohlbrecher et al., 2011). We install the LiDAR on the Locobot by replacing the arm with the LiDAR. We note that the Hokuyo UST-10LX Scanning Laser Rangefinder is an expensive sensor. It costs $1600 as compared to the whole Locobot costing less than $2000 without the arm. Using expensive sensors can improve the performance of a model, however for a method to be scalable, it should ideally work with cheaper sensors too. In order to demonstrate the scalability of our method, we use the LiDAR only to collect the data for building noise models and not for training or deploying navigation policies in the real world. For the sensor estimate, we use the Kobuki base odometry available on the Locobot. We approximate the LiDAR pose estimate to be the true pose of the agent as it is orders of magnitude more accurate than the base sensor. For each action, we collect 600 datapoints from both the base sensor and the LiDAR, making a total of 3600 datapoints (600 ∗ 3 ∗ 2). We use 500 datapoints for each action to fit the actuation and sensor noise models and use the remaining 100 datapoints for validation. For each action a, the LiDAR pose estimates give us samples p_1^i and the base sensor readings give us samples p′_1^i, i = 1, 2, . . . , 600.
The difference between the LiDAR estimates (p_1^i) and the control command (Δu_a) gives us samples of the actuation noise for action a: ε_{act,a}^i = p_1^i − Δu_a, and the difference between the base sensor readings and the LiDAR estimates gives us samples of the sensor noise: ε_{sen,a}^i = p′_1^i − p_1^i. For each action a, we fit a separate Gaussian Mixture Model for the actuation noise and the sensor noise using the samples ε_{act,a}^i and ε_{sen,a}^i respectively, making a total of 6 models. We fit Gaussian mixture models with the number of components ranging from 1 to 20 and pick the model with the highest likelihood on the validation set (a minimal sketch of this fitting procedure is given below). Each component in these Gaussian mixture models is a multi-variate Gaussian in 3 variables, x, y and o. We implement these actuation and sensor noise models in the Habitat simulator for our experiments.

4 http://locobot.org

C NEURAL SLAM MODULE IMPLEMENTATION DETAILS

The Neural SLAM module (f_Map) takes in the current RGB observation, s_t ∈ R^{3×H×W}, the current and last sensor reading of the agent pose x′_{t−1:t} and the map at the previous time step m_{t−1} ∈ R^{2×M×M}, and outputs an updated map, m_t ∈ R^{2×M×M} (see Figure 2):

m_t, x̂_t = f_Map(s_t, x′_{t−1:t}, x̂_{t−1}, m_{t−1} | θ_M, b_{t−1}),

where θ_M denotes the trainable parameters and b_{t−1} denotes the internal representations of the Neural SLAM module. The Neural SLAM module can be broken down into two parts, a Mapper (f_Pr) and a Pose Estimator Unit (f_PE). The Mapper outputs an egocentric top-down 2D spatial map, p^{ego}_t ∈ [0, 1]^{2×V×V} (where V is the vision range), predicting the obstacles and the explored area in the current observation: p^{ego}_t = f_Pr(s_t | θ_Pr), where θ_Pr are the parameters of the Mapper. It consists of ResNet18 convolutional layers to produce an embedding of the observation. This embedding is passed through two fully-connected layers followed by 3 deconvolutional layers to get the first-person top-down 2D spatial map prediction.

Now, we would like to add the egocentric map prediction (p^{ego}_t) to the geocentric map from the previous time step (m_{t−1}). In order to transform the egocentric map to the geocentric frame, we need the pose of the agent in the geocentric frame. The sensor reading x′_t is typically noisy. Thus, we have a Pose Estimator to correct the sensor reading and give an accurate estimate of the agent's geocentric pose. In order to estimate the pose of the agent, we first calculate the relative pose change (dx) from the last time step using the sensor readings at the current and last time step (x′_{t−1}, x′_t). Then we use a Spatial Transformation (Jaderberg et al., 2015) on the egocentric map prediction at the last frame (p^{ego}_{t−1}) based on the relative pose change (dx): p′_{t−1} = f_ST(p^{ego}_{t−1} | dx). Note that the parameters of this Spatial Transformation are not learnt, but calculated using the pose change (dx). This transforms the projection at the last step to the current egocentric frame of reference. If the sensor were accurate, p′_{t−1} would overlap closely with p^{ego}_t. The Pose Estimator Unit takes p′_{t−1} and p^{ego}_t as input and predicts the relative pose change: dx̂_t = f_PE(p′_{t−1}, p^{ego}_t | θ_PE). The intuition is that by looking at the egocentric predictions of the last two frames, the pose estimator can learn to predict the small translation and/or rotation that would align them better. The predicted relative pose change is then added to the last pose estimate to get the final pose estimate: x̂_t = x̂_{t−1} + dx̂_t.
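The per-action noise-model fitting from Appendix B above, with the number of mixture components selected on the held-out split, might look as follows; this sklearn-based sketch and all names in it are our own illustrative choices, not the released implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_noise_model(eps_samples, n_train=500, max_components=20, seed=0):
    """Fit one noise model (actuation or sensor) for one action.

    eps_samples: (600, 3) array of noise samples (x, y, o), e.g.
    eps_act = p1_lidar - delta_u_a, or eps_sen = p1_odometry - p1_lidar.
    The first n_train samples are used for fitting; the rest select the number
    of components by validation log-likelihood.
    """
    train, val = eps_samples[:n_train], eps_samples[n_train:]
    best_model, best_ll = None, -np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=seed)
        gmm.fit(train)
        ll = gmm.score(val)  # average log-likelihood per validation sample
        if ll > best_ll:
            best_model, best_ll = gmm, ll
    return best_model

# When simulating a noisy action, a noise sample can then be drawn with
# noise, _ = fitted_model.sample(1) and added to the intended pose change.
```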
Finally, the egocentric spatial map prediction is transformed to the geocentric frame using the current pose prediction of the agent (x̂_t) with another Spatial Transformation, and aggregated with the previous spatial map (m_{t−1}) using a channel-wise pooling operation: m_t = m_{t−1} + f_ST(p^{ego}_t | x̂_t). Combining all the functions and transformations:

m_t = f_Map(s_t, x′_{t−1:t}, m_{t−1} | θ_M, b_{t−1}) = m_{t−1} + f_ST(p^{ego}_t | x̂_{t−1} + f_PE(f_ST(p^{ego}_{t−1} | x′_{t−1:t}), f_Pr(s_t | θ_Pr) | θ_PE)),

where θ_Pr, θ_PE ∈ θ_M, and p^{ego}_{t−1} ∈ b_{t−1}.

D ARCHITECTURE DETAILS

We use PyTorch (Paszke et al., 2017) for implementing and training our model. The Mapper in the Neural SLAM module consists of ResNet18 convolutional layers followed by 2 fully-connected layers trained with a dropout of 0.5, followed by 3 deconvolutional layers. The Pose Estimator consists of 3 convolutional layers followed by 2 fully-connected layers. The Global Policy is a 5-layer fully-convolutional network, while the Local Policy consists of a 3-layer convolutional network followed by a GRU. The Global and Local policies are both trained using Reinforcement Learning. The reward for the Global policy is the increase in coverage and the reward for the Local policy is the reduction in Euclidean distance to the short-term goal. Our PPO (Schulman et al., 2017) implementation of the Global and Local policies is based on Kostrikov (2018). In addition to the RGB observation, the Local policy receives the relative distance and angle to the short-term goal, the current timestep and the last action as input. We bin the relative distance (bin size increasing with distance), relative angle (5-degree bins) and current timestep (30 time step bins) before passing them through embedding layers. This kind of discretization has been used previously for RL policies (Lample and Chaplot, 2017; Chaplot and Lample, 2017) and it improved the sample efficiency as compared to passing the continuous values as input directly. For fair comparison, we use the same discretization for all the baselines as well.

E HYPERPARAMETER DETAILS

We train all the components with 72 parallel threads, with each thread using one of the 72 scenes in the Gibson training set. This leads to a batch size of 72 for training the Neural SLAM module. The Global policy samples a new goal every 25 timesteps. We use Proximal Policy Optimization (Schulman et al., 2017) for training the Global and Local policies with 72 parallel threads and a horizon length of 25 steps for the Local policy and 20 steps for the Global policy (20 steps for the Global policy is equivalent to 500 low-level timesteps as the Global policy samples a new goal after every 25 timesteps). We use the Adam optimizer with a learning rate of 0.0001 for training both the units in the Neural SLAM module and Adam with a learning rate of 0.00025 for training the Global and Local policies. We use a discount factor of γ = 0.99, an entropy coefficient of 0.001, and a value loss coefficient of 0.5 for training both the Global and Local policies. The input frame size is 128 × 128, and the vision range for the SLAM module is V = 64, i.e. 3.2m (each cell is 5cm in length). Since there are no parameters dependent on the map size, it can be adaptive. We train with a map size of M = 960 (equivalent to 48m). A map of size 48m × 48m is large enough for all scenes in the Gibson val set. We use an adaptive map size for Pointgoal evaluation such that the goal lies within the central 50% of the map to handle even larger maps in the unseen test set. For the exploration task, we train and test with a constant M = 960.
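The input discretization for the Local policy described in Appendix D might be implemented as in the sketch below; the concrete bin edges are our own assumptions, since only the qualitative binning scheme (growing distance bins, 5-degree angle bins, 30-timestep bins) is specified.

```python
import numpy as np

def discretize_local_inputs(rel_dist_m, rel_angle_deg, timestep):
    """Discretize the Local policy's scalar inputs before the embedding lookups.

    The bin edges below are illustrative assumptions; the paper only states that
    the distance bin size grows with distance, angles use 5-degree bins and the
    timestep uses 30-step bins.
    """
    # Distance bins whose width grows with distance (in metres).
    dist_edges = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
    dist_bin = int(np.digitize(rel_dist_m, dist_edges))
    # 5-degree angle bins over [-180, 180).
    angle_bin = int((rel_angle_deg + 180.0) // 5) % 72
    # Bins of 30 timesteps each (our reading of "30 time step bins").
    time_bin = int(timestep // 30)
    return dist_bin, angle_bin, time_bin

# Each bin index is then mapped to a learned embedding vector (e.g. torch.nn.Embedding)
# and concatenated with the convolutional features of the RGB observation.
```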
For the Global policy in the Exploration task, the size of the Global Policy input is G = 240. F ADDITIONAL RESULTS
1. What is the main contribution of the paper regarding visual robot navigation?
2. How does the proposed method differ from traditional planning algorithms?
3. What are the advantages and disadvantages of the proposed approach?
4. How does the mapping network learn projective geometry from data?
5. Can you explain the integration of the "handcrafted" planner (front propagation) into the learned framework?
6. How do the results compare to other methods in the literature, such as CMP?
7. What are some limitations of the proposed method, particularly regarding hierarchical planning?
8. Can you provide more information on the realistic sensor model used in the experiments?
9. What is the role of pose estimation in the proposed method?
10. How does the method handle unexplored obstacles during planning?
Review
Review The paper describes a method for visual robot navigation in simulated environments. In terms of overall objectives and targeted reasoning, the current approaches can be roughly divided into two groups: i) learning tasks requiring high-level reasoning for navigation, involving the detection and discovery of objects and their affordances and eventually also requiring to process language input, and ii) simpler navigation tasks involving geometry and the detection of free (navigable) space: point goal, maximizing coverage etc. The former target more complex problems but the agents are more difficult (currently sometimes impossible) to transfer to real environments, whereas the latter directly target problems which can currently be realistically tackled in real-world scenarios. The paper is of the second group, and addresses one of the currently investigated problems in robot navigation and mapping, namely whether learned navigation is superior to traditional planning algorithms, and whether the two different approaches can be integrated. It proposes to separate the task into long-term and short-term goals, which is not new per se, but the proposed formulation is quite interesting. In particular, the integration of the "handcrafted" planner (front propagation) into the learned framework solves a couple of issues with the sample efficiency of learned methods, while still keeping some flexibility of learning over the 100% traditional approaches.

I will be upfront – I already reviewed an earlier version of this paper for NeurIPS 2019, where this paper unfortunately did not pass. I was actually a favorable reviewer at this time and was defending it. The paper has been improved since and I would be happy to see it pass. I still have a couple of questions, some of which are similar to the ones I raised in the NeurIPS review (others have been addressed since). While I do agree that the targeted tasks might be considered less exciting than tasks involving high-level semantics, I do also think that these tasks are far from solved as soon as we try to implement them in real-life scenarios. I do think that the proposed paper is an interesting step forward.

The advantages of the proposed method are "bought" with a couple of key design choices, in particular the handcrafted non-differentiable long-term path planner. The downside of this is that the loss signals can't be backpropagated through the planner, which restricts the mapping module to very simple mapping information, basically free/navigable space. End-to-end training of navigation could in principle learn to map objects and affordances which are discovered through the task, rather than hardcoded or learned with supervision, which also requires them to be known in advance. This means that the contribution is limited to simpler navigational tasks like the tested exploration and PointGoal. In contrast, other work from the literature uses differentiable planners (e.g. the cited CMP (Gupta et al., 2017), which uses Value Iteration Networks (the cited Tamar et al., 2016)), which allows fine-tuning. The mapping network, which is learned with supervision, is a general encoder-decoder network which needs to translate from projective first-person views to ego-centric bird's eye views. It thus needs to learn projective geometry from data, although projective geometry could be used as structure for the network, given camera calibration, which has been done in other work:
- Chen et al., 2019
- Gupta et al., 2017
- Henriques et al., 2018
and a couple of others.
Several improvements have been made since the NeurIPS submission, some of which I had addressed in my review. The experiments are quite convincing in their comparisons with the state of the art, in particular the generalization performances:
- generalization from Gibson (training) to Matterport (testing)
- generalization from exploration (training) to PointGoal (testing).
A couple of the results have been removed since the NeurIPS submission, which is unfortunate; I think they should be kept in. I appreciated the realistic sensor model fitted to real data measured with a Locobot robot, and the ablation studies, which indicated the contributions of the different planner modules and of pose estimation. The role of the short-term planner has been made clearer in the new paper. I found it interesting that the stellar performance at the Habitat AI challenge was removed from the new paper – this method (or at least a preceding version) won the challenge. But I do understand that this choice was motivated by some remarks of the fellow NeurIPS reviewers regarding the simplicity of the PointGoal task of the challenge.

A couple of less positive aspects, and questions:

On the downside, and following the remarks on literature above, I still think that the results should be compared with CMP, the main competitor of this method. I think this is the main shortcoming of the paper, in particular since CMP is able to perform end-to-end training because the planner is differentiable (Value Iteration Networks, NIPS 2016).

The literature w.r.t. hierarchical planning is very far from exhaustive and lots of work is missing, including recent work such as Embodied Question Answering (Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra, CVPR 2018, and several follow-up papers), but also quite classical work like the literature around the options framework, with the following starting point: R.S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181–211, 1999. And many other papers.

Figures 1 and 2 have been completely redone, but they are not completely clear. In particular, several intermediate representations/maps/images are not commented or labeled; they should be annotated with the symbols from the text.

The role of the sensor output is not clear. Sensors normally provide relative positions … but the text seems to indicate absolute pose.

Some details are lacking. In "… to predict the pose change between the two maps …" it is unclear what is done here. Is this self-supervision?

The authors mention that unexplored area is considered as free space for planning. What consequences did this have in case of unexplored obstacles? I guess the problem was delegated to the local policy, which needed to cope with these issues?

The last paragraph before the conclusions briefly mentions experiments and comparisons but without giving any details. This is unfortunate, since there is still space available (the paper length is 8.5 pages).
This indicates that end-to-end learning methods are effective at short-term navigation but struggle when long-term planning is required to reach a distant goal. In Figure 8, we show some example trajectories of the ANS model along with the predicted map. The successful trajectories indicate that the model exhibits strong backtracking behavior, which makes it effective at distant goals requiring long-term planning. Figure 9 visualizes a trajectory in the PointGoal task, showing first-person observations and the corresponding map predictions. Please refer to the project webpage for visualization videos. Habitat Challenge Results. We submitted the ANS model to the CVPR 2019 Habitat Pointgoal Navigation Challenge. The results are shown in Figure 6. ANS was submitted under code-name ‘Arnold’. ANS was the winning entry for both RGB and RGB-D tracks among over 150 submissions from 16 teams, achieving an SPL of 0.805 (RGB) and 0.948 (RGB-D) on the Test Challenge set. B NOISE MODEL IMPLEMENTATION DETAILS In order to implement the actuation and sensor noise models, we would like to collect data for navigational actions in the Habitat simulator. We use three default navigational actions: Forward: move forward by 25cm, Turn Right: on-the-spot rotation clockwise by 10 degrees, and Turn Left: on-the-spot rotation counter-clockwise by 10 degrees. The control commands are implemented as uForward = (0.25, 0, 0), uRight = (0, 0, −10π/180) and uLeft = (0, 0, 10π/180). In practice, a robot can also rotate slightly while moving forward and translate a bit while rotating on the spot, creating rotational actuation noise in the forward action and, similarly, translational actuation noise in the on-the-spot rotation actions. We use a Locobot (http://locobot.org) to collect data for building the actuation and sensor noise models. We use the pyrobot API (Murali et al., 2019) along with ROS (Quigley et al., 2009) to implement the control commands and get sensor readings. In order to get an accurate agent pose, we use a Hokuyo UST-10LX Scanning Laser Rangefinder (LiDAR), which is particularly precise in our scenario as we take static readings in 2D (Kohlbrecher et al., 2011). We install the LiDAR on the Locobot by replacing the arm with the LiDAR. We note that the Hokuyo UST-10LX Scanning Laser Rangefinder is an expensive sensor. It costs $1600 as compared to the whole Locobot costing less than $2000 without the arm. Using expensive sensors can improve the performance of a model; however, for a method to be scalable, it should ideally work with cheaper sensors too. In order to demonstrate the scalability of our method, we use the LiDAR only to collect the data for building noise models and not for training or deploying navigation policies in the real world. For the sensor estimate, we use the Kobuki base odometry available on the Locobot. We approximate the LiDAR pose estimate to be the true pose of the agent as it is orders of magnitude more accurate than the base sensor. For each action, we collect 600 datapoints from both the base sensor and the LiDAR, making a total of 3600 datapoints (600 × 3 × 2). We use 500 datapoints for each action to fit the actuation and sensor noise models and use the remaining 100 datapoints for validation. For each action a, the LiDAR pose estimates give us samples p^i_1 and the base sensor readings give us samples p′^i_1, for i = 1, 2, . . . , 600.
The difference between the LiDAR estimates (p^i_1) and the control command (∆ua) gives us samples of the actuation noise for action a, ε^i_act,a = p^i_1 − ∆ua, and the difference between the base sensor readings and the LiDAR estimates gives us samples of the sensor noise, ε^i_sen,a = p′^i_1 − p^i_1. For each action a, we fit a separate Gaussian Mixture Model for the actuation noise and sensor noise using the samples ε^i_act,a and ε^i_sen,a respectively, making a total of 6 models. We fit Gaussian mixture models with the number of components ranging from 1 to 20 and pick the model with the highest likelihood on the validation set. Each component in these Gaussian mixture models is a multi-variate Gaussian in 3 variables, x, y and o. We implement these actuation and sensor noise models in the Habitat simulator for our experiments. C NEURAL SLAM MODULE IMPLEMENTATION DETAILS The Neural SLAM module (fMap) takes in the current RGB observation st ∈ R^(3×H×W), the current and last sensor readings of the agent pose x′t−1:t and the map at the previous time step mt−1 ∈ R^(2×M×M), and outputs an updated map mt ∈ R^(2×M×M) (see Figure 2): mt, x̂t = fMap(st, x′t−1:t, x̂t−1, mt−1 | θM, bt−1), where θM denotes the trainable parameters and bt−1 denotes the internal representations of the Neural SLAM module. The Neural SLAM module can be broken down into two parts, a Mapper (fPr) and a Pose Estimator Unit (fPE). The Mapper outputs an egocentric top-down 2D spatial map p^ego_t ∈ [0, 1]^(2×V×V) (where V is the vision range), predicting the obstacles and the explored area in the current observation: p^ego_t = fPr(st | θPr), where θPr are the parameters of the Mapper. It consists of ResNet18 convolutional layers to produce an embedding of the observation. This embedding is passed through two fully-connected layers followed by 3 deconvolutional layers to get the first-person top-down 2D spatial map prediction. Now, we would like to add the egocentric map prediction (p^ego_t) to the geocentric map from the previous time step (mt−1). In order to transform the egocentric map to the geocentric frame, we need the pose of the agent in the geocentric frame. The sensor reading x′t is typically noisy. Thus, we have a Pose Estimator to correct the sensor reading and give an accurate estimate of the agent’s geocentric pose. In order to estimate the pose of the agent, we first calculate the relative pose change (dx) from the last time step using the sensor readings at the current and last time step (x′t−1, x′t). Then we use a Spatial Transformation (Jaderberg et al., 2015) on the egocentric map prediction at the last frame (p^ego_t−1) based on the relative pose change (dx): p′t−1 = fST(p^ego_t−1 | dx). Note that the parameters of this Spatial Transformation are not learnt, but calculated using the pose change (dx). This transforms the projection at the last step to the current egocentric frame of reference. If the sensor were accurate, p′t−1 would highly overlap with p^ego_t. The Pose Estimator Unit takes in p′t−1 and p^ego_t as input and predicts the relative pose change: d̂xt = fPE(p′t−1, p^ego_t | θPE). The intuition is that by looking at the egocentric predictions of the last two frames, the pose estimator can learn to predict the small translation and/or rotation that would align them better. The predicted relative pose change is then added to the last pose estimate to get the final pose estimate x̂t = x̂t−1 + d̂xt.
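A minimal sketch of the Pose Estimator update described above is given below; the layer sizes are assumptions (the paper specifies 3 convolutional layers followed by 2 fully-connected layers), and p_prev_aligned stands for p′t−1, the previous egocentric prediction already transformed by the sensor-based pose change dx.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the pose update x̂_t = x̂_{t-1} + d̂x_t (hypothetical sizes).
class PoseEstimator(nn.Module):
    def __init__(self, vision_range=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2), nn.ReLU(),   # p′_{t-1} and p^ego_t stacked: 2 + 2 channels
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 128), nn.ReLU(),      # 7 x 7 holds for vision_range = 64
            nn.Linear(128, 3),                          # predicted pose correction (dx, dy, do)
        )

    def forward(self, p_prev_aligned, p_ego):
        return self.net(torch.cat([p_prev_aligned, p_ego], dim=1))

# pose update: x_hat = x_hat_prev + pose_estimator(p_prev_aligned, p_ego)
```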
Finally, the egocentric spatial map prediction is transformed to the geocentric frame with another Spatial Transformation, using the current pose prediction of the agent (x̂t), and aggregated with the previous spatial map (mt−1) using a channel-wise pooling operation: mt = mt−1 + fST(pt | x̂t). Combining all the functions and transformations: mt = fMap(st, x′t−1:t, mt−1 | θM, bt−1) = mt−1 + fST(pt | x′t + fPE(fST(p^ego_t−1 | x′t−1:t), fPr(st | θPr) | θPE)), where θPr, θPE ∈ θM, and p^ego_t−1 ∈ bt−1. D ARCHITECTURE DETAILS We use PyTorch (Paszke et al., 2017) for implementing and training our model. The Mapper in the Neural SLAM module consists of ResNet18 convolutional layers followed by 2 fully-connected layers trained with a dropout of 0.5, followed by 3 deconvolutional layers. The Pose Estimator consists of 3 convolutional layers followed by 2 fully-connected layers. The Global Policy is a 5-layer fully-convolutional network, while the Local Policy consists of a 3-layer convolutional network followed by a GRU. The Global and Local policies are both trained using Reinforcement Learning. The reward for the Global policy is the increase in coverage and the reward for the Local policy is the reduction in Euclidean distance to the short-term goal. Our PPO (Schulman et al., 2017) implementation of the Global and Local policies is based on Kostrikov (2018). In addition to the RGB observation, the Local policy receives the relative distance and angle to the short-term goal, the current timestep and the last action as input. We bin the relative distance (bin size increasing with distance), relative angle (5 degree bins) and current timestep (30 time step bins) before passing them through embedding layers. This kind of discretization has been used previously for RL policies (Lample and Chaplot, 2017; Chaplot and Lample, 2017) and it improved the sample efficiency as compared to passing the continuous values as input directly. For a fair comparison, we use the same discretization for all the baselines as well. E HYPERPARAMETER DETAILS We train all the components with 72 parallel threads, with each thread using one of the 72 scenes in the Gibson training set. This leads to a batch size of 72 for training the Neural SLAM module. The Global policy samples a new goal every 25 timesteps. We use Proximal Policy Optimization (Schulman et al., 2017) for training the Global and Local policies with 72 parallel threads and a horizon length of 25 steps for the Local policy and 20 steps for the Global policy (20 steps for the Global policy is equivalent to 500 low-level timesteps as the Global policy samples a new goal after every 25 timesteps). We use the Adam optimizer with a learning rate of 0.0001 for training both the units in the Neural SLAM module and Adam with a learning rate of 0.00025 for training the Global and Local policies. We use a discount factor of γ = 0.99, an entropy coefficient of 0.001 and a value loss coefficient of 0.5 for training both the Global and Local policies. The input frame size is 128 × 128, the vision range for the SLAM module is V = 64, i.e. 3.2m (each cell is 5cm in length). Since there are no parameters dependent on the map size, it can be adaptive. We train with a map size of M = 960 (equivalent to 48m). A map of size 48m × 48m is large enough for all scenes in the Gibson val set. We use an adaptive map size for Pointgoal evaluation such that the goal lies within the central 50% of the map, to handle even larger maps in the unseen test set. For the exploration task, we train and test with a constant M = 960.
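As an illustration of the input discretization for the Local policy described above (binned relative distance, 5-degree angle bins and 30-step timestep bins passed through embedding layers), a small sketch is given below; the exact bin edges and embedding size are assumptions, not the authors' values.

```python
import torch
import torch.nn as nn

# Illustrative sketch of binning + embedding of the Local policy's scalar inputs.
angle_emb = nn.Embedding(72, 8)       # 5-degree bins over 360 degrees
dist_emb = nn.Embedding(12, 8)        # bin size increasing with distance (assumed edges below)
time_emb = nn.Embedding(34, 8)        # 30-step bins over a 1000-step episode

dist_edges = torch.tensor([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 16.0])

def discretize(rel_dist, rel_angle_deg, timestep):
    # rel_dist (metres), rel_angle_deg (degrees), timestep: tensors of the same shape
    d = torch.bucketize(rel_dist, dist_edges)            # coarser bins for larger distances
    a = (rel_angle_deg % 360) // 5
    t = timestep // 30
    return torch.cat([dist_emb(d.long()), angle_emb(a.long()), time_emb(t.long())], dim=-1)
```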
For the Global policy in the Exploration task, the size of the Global Policy input is G = 240. F ADDITIONAL RESULTS
1. What is the main contribution of the paper regarding coverage maximization? 2. What are the concerns regarding the complexity of the proposed architecture and policy? 3. How does the reviewer assess the principled exploration strategy employed by the authors? 4. Are there any issues with the citations used in the paper, particularly regarding exploration in reinforcement learning? 5. What is unclear regarding the generation and training of long-term goals? 6. Are there any typos or language errors in the review that need correction?
Review
Review This paper proposes a new architecture and policy for coverage maximization (which the authors call exploration). Overall the paper is well written, but I have some major concerns. However I am not an expert in navigation / robotics so i have given myself the lowest confidence for this paper. My highest level concern is that this approach seems extremely complicated (eg Figs 1 and 2), as well as employing several sub-algorithms as part of the procedure (eg Fast Marching Method). It's not clear to me why any of the components are necessary, though I do appreciate the ablation study. But even within that ablation not all components are ablated (e.g., why GRU units?). My experience suggests that extremely complicated architectures such as this one are brittle and don't generalize (and it goes against Sutton's 'bitter lesson'). The fact that the experiments are so small does not help. Perhaps more challenging domains would yield negative results. Further, how tuned are the baselines? And it seems that the baselines are general RL agents and not optimized for coverage maximization like this architecture. The authors say " We will also open-source the code", has this been done? Open-sourcing would help others reproduce the results since as it stands I think this is too complicated to be reproduced. The level of intricacy makes me think that perhaps this paper is more suited to a robotics conference. Secondly, the paper mentions exploration a lot, but it's not clear to me how this is a principled exploration strategy. Exploration is not in fact defined as "visit as much area as possible" or "maximize the coverage in a fixed time budget", as the authors suggest. In fact the sentences "We follow the exploration task setup proposed by Chen et al. 2019 where the objective is to maximize the coverage in a fixed time budget. [The] coverage is defined as the total area in the map known to be traversable" appears twice in this manuscript. Exploration is better defined within the context of the explore-exploit tradeoff, whereby an agent must sometimes take sub-optimal actions in order to learn more about the environment in the hope of possibly increasing it's long-term return. Conflating 'coverage-maximization' and exploration is confusing. I think the paper should be rewritten to de-emphasize exploration and instead talk about coverage-maximization, which is more accurate. "Exploration has also been studied more generally in RL for faster training (Schmidhuber, 1991)." I certainly would *not* cite Schmidhuber 91 as the canonical reference of exploration in RL. Far, far, more appropriate would be either the Sutton+Barto RL book (which doesn't do a great job covering exploration but is at least a decent overall reference) or the works of Auer 2002 and Jaksch et al 2010, and related papers. The Schmidhuber citation should be removed and replaced with a few that actually make sense in this context. I don't understand how the goals (especially long-term) are generated and trained. Is the long-term goal trained using the reward signal? This is not properly explained. "and summarize major these below" typo, probably should be themes or theses? "agnet pose" typo.
ICLR
Title Learning To Explore Using Active Neural SLAM Abstract This work presents a modular and hierarchical approach to learn policies for exploring 3D environments, called ‘Active Neural SLAM’. Our approach leverages the strengths of both classical and learning-based methods, by using analytical path planners with a learned SLAM module, and global and local policies. The use of learning provides flexibility with respect to input modalities (in the SLAM module), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies). Such use of learning within each module retains its benefits, while at the same time, hierarchical decomposition and modular training allow us to sidestep the high sample complexities associated with training end-to-end policies. Our experiments in visually and physically realistic simulated 3D environments demonstrate the effectiveness of our approach over past learning and geometry-based approaches. The proposed model can also be easily transferred to the PointGoal task and was the winning entry of the CVPR 2019 Habitat PointGoal Navigation Challenge. 1 INTRODUCTION Navigation is a critical task in building intelligent agents. Navigation tasks can be expressed in many forms; for example, point goal tasks involve navigating to specific coordinates and semantic navigation involves finding a path to a specific scene or object. Irrespective of the task, a core problem for navigation in unknown environments is exploration, i.e., how to efficiently visit as much of the environment as possible. This is useful for maximizing the coverage to give the best chance of finding the target in unknown environments or for efficiently pre-mapping environments on a limited time budget. Recent work from Chen et al. (2019) has used end-to-end learning to tackle this problem. Their motivation is threefold: a) learning provides flexibility to the choice of input modalities (classical systems rely on observing geometry through use of specialized sensors, while learning systems can infer geometry directly from RGB images), b) use of learning can improve robustness to errors in explicit state estimation, and c) learning can effectively leverage structural regularities of the real world, leading to more efficient behavior in previously unseen environments. This led to their design of an end-to-end trained neural-network-based policy that processes raw sensory observations to directly output actions that the agent should execute. While use of learning for exploration is well motivated, casting the exploration problem as an end-to-end learning problem has its own drawbacks. Learning about mapping, state-estimation and path-planning purely from data in an end-to-end manner can be prohibitively expensive. Consequently, past end-to-end learning work for exploration from Chen et al. (2019) relies on the use of imitation learning and many millions of frames of experience, but still performs worse than classical methods that don’t require any training at all. This motivates our work. In this paper, we investigate alternate formulations of employing learning for exploration that retain the advantages that learning has to offer, but do not suffer from the drawbacks of full-blown end-to-end learning.
Our key conceptual insight is that the use of learning for leveraging structural regularities of indoor environments, robustness to state-estimation errors, and flexibility with respect to input modalities happens at different time scales and can thus be factored out. This motivates the use of learning in a modular and hierarchical fashion inside of what one may call a ‘classical navigation pipeline’. This results in navigation policies that can work with raw sensory inputs such as RGB images, are robust to state estimation errors, and leverage regularities of real-world layouts. This results in extremely competitive performance over both geometry-based methods and recent learning-based methods, while at the same time requiring a fraction of the number of samples. More specifically, our proposed exploration architecture comprises a learned Neural SLAM module, a global policy, and a local policy that are interfaced via the map and an analytical path planner. The learned Neural SLAM module produces free-space maps and estimates the agent pose from input RGB images and motion sensors. The global policy consumes this free-space map along with the agent pose and employs learning to exploit structural regularities in the layout of real-world environments to produce long-term goals. These long-term goals are used to generate short-term goals for the local policy (using a geometric path-planner). This local policy uses learning to directly map raw RGB images to actions that the agent should execute. Use of learning in the SLAM module provides flexibility with respect to input modality, the learned global policy can exploit regularities in the layout of real-world environments, and the learned local policies can use visual feedback to exhibit more robust behaviour. At the same time, the hierarchical and modular design and the use of analytical planning significantly cut down the search space during training, leading to better performance as well as sample efficiency. We demonstrate our proposed approach in visually and physically realistic simulators for the task of geometric exploration (visit as much area as possible). We work with the Habitat simulator from Savva et al. (2019). While Habitat is already visually realistic (it uses real-world scans from Chang et al. (2017) and Xia et al. (2018) as environments), we improve its physical realism by using actuation and odometry sensor noise models that we collected by conducting physical experiments on a real mobile robot. Our experiments and ablations in this realistic simulation reveal the effectiveness of our proposed approach for the task of exploration. A straightforward modification of our method also tackles point-goal navigation tasks and won the AI Habitat challenge at CVPR 2019 across all tracks. 2 RELATED WORK Navigation has been well studied in classical robotics. There has been a renewed interest in the use of learning to arrive at navigation policies for a variety of tasks. Our work builds upon concepts in classical robotics and learning for navigation. We survey related works below. Navigation Approaches. Classical approaches to navigation break the problem into two parts: mapping and path planning. Mapping is done via simultaneous localization and mapping (Thrun et al., 2005; Hartley and Zisserman, 2003; Fuentes-Pacheco et al., 2015), by fusing information from multiple views of the environment.
While sparse reconstruction can be done well with monocular RGB images (Mur-Artal and Tardós, 2017), dense mapping is inefficient (Newcombe et al., 2011) or requires specialized scanners such as Kinect (Izadi et al., 2011). Maps are used to compute paths to goal locations via path planning (Kavraki et al., 1996; Lavalle and Kuffner Jr, 2000; Canny, 1988). These classical methods have inspired recent learning-based techniques. Researchers have designed neural network policies that reason via spatial representations (Gupta et al., 2017; Parisotto and Salakhutdinov, 2018; Zhang et al., 2017; Henriques and Vedaldi, 2018; Gordon et al., 2018), topological representations (Savinov et al., 2018a;b), or use differentiable and trainable planners (Tamar et al., 2016; Lee et al., 2018; Gupta et al., 2017; Khan et al., 2017). Our work furthers this research: we study a hierarchical and modular decomposition of the problem and employ learning inside these components instead of end-to-end learning. Research also focuses on incorporating semantics in SLAM (Pronobis and Jensfelt, 2012; Walter et al., 2013). Exploration in Navigation. While a number of works focus on passive map-building, path planning and goal-driven policy learning, a much smaller body of work tackles the problem of active SLAM, i.e., how to actively control the camera for map building. We point readers to Fuentes-Pacheco et al. (2015) for a detailed survey, and summarize major themes below. Most such works frame this problem as a Partially Observable Markov Decision Process (POMDP) that is approximately solved (Martinez-Cantin et al., 2009; Kollar and Roy, 2008), and/or seek to find a sequence of actions that minimizes the uncertainty of maps (Stachniss et al., 2005; Carlone et al., 2014). Another line of work explores by picking vantage points (such as on the frontier between explored and unexplored regions (Dornhege and Kleiner, 2013; Holz et al., 2010; Yamauchi, 1997; Xu et al., 2017)). Recent works from Chen et al. (2019); Savinov et al. (2018b); Fang et al. (2019) attack this problem via learning. Our proposed modular policies unify the last two lines of research, and we show improvements over representative methods from both these lines of work. Exploration has also been studied more generally in RL in the context of the exploration-exploitation trade-off (Sutton and Barto, 2018; Kearns and Singh, 2002; Auer, 2002; Jaksch et al., 2010). Hierarchical and Modular Policies. Hierarchical RL (Dayan and Hinton, 1993; Sutton et al., 1999; Barto and Mahadevan, 2003) is an active area of research, aimed at automatically discovering hierarchies to speed up learning. However, this has proven to be challenging, and thus most work has resorted to using hand-defined hierarchies. For example, in the context of navigation, Bansal et al. (2019) and Kaufmann et al. (2019) design modular policies for navigation that interface learned policies with low-level feedback controllers. Hierarchical and modular policies have also been used for Embodied Question Answering (Das et al., 2018a; Gordon et al., 2018; Das et al., 2018b). 3 TASK SETUP We follow the exploration task setup proposed by Chen et al. (2019) where the objective is to maximize the coverage in a fixed time budget. The coverage is defined as the total area in the map known to be traversable. Our objective is to train a policy which takes in an observation st at each time step t and outputs a navigational action at to maximize the coverage.
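To make this objective concrete, a schematic of an exploration episode is sketched below; the environment interface (reset, step, traversable_area_seen) is hypothetical and not the Habitat API, and the coverage-increase reward anticipates the training details given later.

```python
# Schematic only: hypothetical environment/policy interface, not the Habitat API.
def run_exploration_episode(env, policy, max_steps=1000):
    obs = env.reset()
    state = policy.init_state()
    coverage = 0.0
    for t in range(max_steps):
        action, state = policy.act(obs, state)       # a_t = pi(s_t)
        obs = env.step(action)
        new_coverage = env.traversable_area_seen()   # area known to be traversable, in m^2
        reward = new_coverage - coverage             # increase in coverage (the RL reward)
        coverage = new_coverage
    return coverage
```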
We try to make our experimental setup in simulation as realistic as possible with the goal of transferring trained policies to the real world. We use the Habitat simulator (Savva et al., 2019) with the Gibson (Xia et al., 2018) and Matterport (MP3D) (Chang et al., 2017) datasets for our experiments. Both the Gibson and Matterport datasets are based on real-world scene reconstructions and are thus significantly more realistic than the synthetic SUNCG dataset (Song et al., 2017) used for past research on exploration (Chen et al., 2019; Fang et al., 2019). In addition to synthetic scenes, prior works on learning-based navigation have also assumed simplistic agent motion. Some works limit agent motion on a grid with 90 degree rotations (Zhu et al., 2017; Gupta et al., 2017; Parisotto and Salakhutdinov, 2018; Chaplot et al., 2018). Other works which implement fine-grained control typically assume unrealistic agent motion with no noise (Savva et al., 2019) or perfect knowledge of the agent pose (Chaplot et al., 2016). Since the motion is simplistic, it becomes trivial to estimate the agent pose in most cases even if it is not assumed to be known. The reason behind these assumptions on agent motion and pose is that motion and sensor noise models are not known. In order to relax both these assumptions, we collect motion and sensor data in the real world and implement more realistic agent motion and sensor noise models in the simulator, as described in the following subsection. 3.1 ACTUATION AND SENSOR NOISE MODEL We represent the agent pose by (x, y, o) where x and y represent the xy co-ordinates of the agent measured in metres and o represents the orientation of the agent in radians (measured counterclockwise from the x-axis). Without loss of generality, assume the agent starts at p0 = (0, 0, 0). Now, suppose the agent takes an action at. Each action is implemented as a control command on a robot. Let the corresponding control command be ∆ua = (xa, ya, oa). Let the agent pose after the action be p1 = (x*, y*, o*). The actuation noise (εact) is the difference between the actual agent pose (p1) after the action and the intended agent pose (p0 + ∆u): εact = p1 − (p0 + ∆u) = (x* − xa, y* − ya, o* − oa). Mobile robots typically have sensors which estimate the robot pose as it moves. Let the sensor estimate of the agent pose after the action be p′1 = (x′, y′, o′). The sensor noise (εsen) is given by the difference between the sensor pose estimate (p′1) and the actual agent pose (p1): εsen = p′1 − p1 = (x′ − x*, y′ − y*, o′ − o*). In order to implement the actuation and sensor noise models, we would like to collect data for navigational actions in the Habitat simulator. We use three default navigational actions: Forward: move forward by 25cm, Turn Right: on-the-spot rotation clockwise by 10 degrees, and Turn Left: on-the-spot rotation counter-clockwise by 10 degrees. The control commands are implemented as uForward = (0.25, 0, 0), uRight = (0, 0, −10π/180) and uLeft = (0, 0, 10π/180). In practice, a robot can also rotate slightly while moving forward and translate a bit while rotating on the spot, creating rotational actuation noise in the forward action and, similarly, translational actuation noise in the on-the-spot rotation actions. We use a LoCoBot (http://locobot.org) to collect data for building the actuation and sensor noise models. We use the pyrobot API (Murali et al., 2019) along with ROS (Quigley et al., 2009) to implement the control commands and get sensor readings.
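The noise-sample computation defined above, together with the mixture-model fitting described next (and detailed in Appendix B), can be sketched as follows; the variable names and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# p1: actual poses after the action (e.g. from LiDAR), p1_sensor: odometry estimates,
# both of shape (N, 3) for (x, y, o); delta_u: the commanded pose change for this action.
def noise_samples(p1, p1_sensor, delta_u):
    eps_act = p1 - delta_u        # actuation noise, assuming p0 = (0, 0, 0)
    eps_sen = p1_sensor - p1      # sensor noise
    return eps_act, eps_sen

def fit_noise_model(train_samples, val_samples, max_components=20):
    best, best_ll = None, -np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k).fit(train_samples)
        ll = gmm.score(val_samples)               # mean log-likelihood on held-out samples
        if ll > best_ll:
            best, best_ll = gmm, ll
    return best
```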
For each action a, we fit a separate Gaussian Mixture Model for the actuation noise and sensor noise, making a total of 6 models. For each model, we collect 600 datapoints. The number of components in each Gaussian mixture model is chosen using cross-validation. We implement these actuation and sensor noise models in the Habitat simulator for our experiments. We have released the noise models, along with their implementation in the Habitat simulator, in the open-source code. 4 METHODS We propose a modular navigation model, ‘Active Neural SLAM’. It consists of three components: a Neural SLAM module, a Global policy and a Local policy, as shown in Figure 1. The Neural SLAM module predicts the map of the environment and the agent pose based on the current observations and previous predictions. The Global policy uses the predicted map and agent pose to produce a long-term goal. The long-term goal is converted into a short-term goal using path planning. The Local policy takes navigational actions based on the current observation to reach the short-term goal. Map Representation. The Active Neural SLAM model internally maintains a spatial map mt and the pose of the agent xt. The spatial map mt is a 2 × M × M matrix where M × M denotes the map size and each element in this spatial map corresponds to a cell of size 25cm2 (5cm × 5cm) in the physical world. Each element in the first channel denotes the probability of an obstacle at the corresponding location and each element in the second channel denotes the probability of that location being explored. A cell is considered to be explored when it is known to be free space or an obstacle. The spatial map is initialized with all zeros at the beginning of an episode, m0 = [0]^(2×M×M). The pose xt ∈ R^3 denotes the x and y coordinates of the agent and the orientation of the agent at time t. The agent always starts at the center of the map facing east at the beginning of the episode, x0 = (M/2, M/2, 0.0). Neural SLAM Module. The Neural SLAM Module (fMap) takes in the current RGB observation st, the current and last sensor readings of the agent pose x′t−1:t, and the last agent pose and map estimates x̂t−1, mt−1, and outputs an updated map mt and the current agent pose estimate x̂t (see Figure 2): mt, x̂t = fMap(st, x′t−1:t, x̂t−1, mt−1 | θM), where θM denotes the trainable parameters of the Neural SLAM module. It consists of two learned components, a Mapper and a Pose Estimator. The Mapper outputs an egocentric top-down 2D spatial map p^ego_t ∈ [0, 1]^(2×V×V) (where V is the vision range), predicting the obstacles and the explored area in the current observation. The Pose Estimator predicts the agent pose (x̂t) based on the past pose estimate (x̂t−1) and the last two egocentric map predictions (p^ego_t−1:t). It essentially compares the current egocentric map prediction to the last egocentric map prediction transformed to the current frame to predict the pose change between the two maps. The egocentric map from the Mapper is transformed to a geocentric map based on the pose estimate given by the Pose Estimator and then aggregated with the previous spatial map (mt−1) to get the current map (mt). More implementation details of the Neural SLAM module are provided in the Appendix. Global Policy.
The Global Policy takes ht ∈ [0, 1]^(4×M×M) as input, where the first two channels of ht are the spatial map mt given by the SLAM module, the third channel represents the current agent position estimated by the SLAM module, and the fourth channel represents the visited locations, i.e. for all i, j ∈ {1, 2, . . . , M}: ht[c, i, j] = mt[c, i, j] for c ∈ {0, 1}; ht[2, i, j] = 1 if i = x̂t[0] and j = x̂t[1]; ht[3, i, j] = 1 if (i, j) ∈ [(x̂k[0], x̂k[1])] for k ∈ {0, 1, . . . , t}. We perform two transformations before passing ht to the Global Policy model. The first transformation subsamples a window of size 4 × G × G around the agent from ht. The second transformation performs max pooling operations to get an output of size 4 × G × G from ht. Both the transformations are stacked to form a tensor of size 8 × G × G and passed as input to the Global Policy model. The Global Policy uses a 5-layer convolutional neural network to predict a long-term goal g^l_t in the G × G space: g^l_t = πG(ht | θG), where θG are the parameters of the Global Policy. Planner. The Planner takes the long-term goal (g^l_t), the spatial obstacle map (mt) and the agent pose estimate (x̂t) as input and computes the short-term goal g^s_t, i.e. g^s_t = fPlan(g^l_t, mt, x̂t). It computes the shortest path from the current agent location to the long-term goal (g^l_t) using the Fast Marching Method (Sethian, 1996) based on the current spatial map mt. The unexplored area is considered as free space for planning. We compute a short-term goal coordinate (the farthest point within ds (= 1.25m) of the agent) on the planned path. Local Policy. The Local Policy takes as input the current RGB observation (st) and the short-term goal (g^s_t) and outputs a navigational action, at = πL(st, g^s_t | θL), where θL are the parameters of the Local Policy. The short-term goal coordinate is transformed into a relative distance and angle from the agent’s location before being passed to the Local Policy. It consists of a 3-layer convolutional neural network followed by a GRU layer. 5 EXPERIMENTAL SETUP We use the Habitat simulator (Savva et al., 2019) with the Gibson (Xia et al., 2018) and Matterport (MP3D) (Chang et al., 2017) datasets for our experiments. Both Gibson and MP3D consist of scenes which are 3D reconstructions of real-world environments; however, Gibson is collected using a different set of cameras and consists mostly of office spaces, while MP3D consists mostly of homes with a larger average scene area. We will use Gibson as our training domain, and use MP3D for domain generalization experiments. The observation space consists of RGB images of size 3 × 128 × 128 and base odometry sensor readings of size 3 × 1 denoting the change in the agent’s x-y coordinates and orientation. The action space consists of three actions: move_forward, turn_left, turn_right. Both the base odometry sensor readings and the agent motion based on the actions are noisy. They are implemented using the sensor and actuation noise models based on real-world data, as discussed in Section 3.1. We follow the Exploration task setup proposed by Chen et al. (2019) where the objective is to maximize the coverage in a fixed time budget. Coverage is the total area in the map known to be traversable. We define a traversable point to be known if it is in the field-of-view of the agent and is less than 3m away. We use two evaluation metrics, the absolute coverage area in m2 (Cov) and the percentage of area explored in the scene (% Cov), i.e. the ratio of coverage to the maximum possible coverage in the corresponding scene.
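A minimal sketch of these two metrics is given below, assuming the explored and traversable areas are available as binary grids with the 5cm × 5cm cells described earlier; the function and argument names are illustrative.

```python
import numpy as np

# Illustrative computation of Cov (m^2) and % Cov from binary occupancy grids.
def coverage_metrics(explored_map, traversable_map, cell_size_m=0.05):
    cell_area = cell_size_m ** 2                   # 0.0025 m^2 per cell
    cov = explored_map.sum() * cell_area           # absolute coverage in m^2
    max_cov = traversable_map.sum() * cell_area    # maximum possible coverage of the scene
    return cov, cov / max_cov                      # (Cov, % Cov)
```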
During training, each episode lasts for a fixed length of 1000 steps. We use the train/val/test splits provided by Savva et al. (2019) for both the datasets. Note that the set of scenes used in each split is disjoint, which means the agent is tested on new scenes never seen during training. The Gibson test set is not public but rather held out on an online evaluation server for the Pointgoal task. We use the validation set as the test set for comparison and analysis for the Gibson domain. We do not use the validation set for hyper-parameter tuning. To analyze the performance of all the models with respect to the size of the scene, we split the Gibson validation set into two parts, a small set of 10 scenes with explorable area ranging from 16m2 to 36m2, and a large set of 4 scenes with traversable area ranging from 55m2 to 100m2. Note that the size of the map is usually much larger than the traversable area, with the largest map being about 23m long and 11m wide. Training Details. We train our model in the Gibson domain and transfer it to the Matterport domain. The Mapper is trained to predict egocentric projections, and the Pose Estimator is trained to predict the agent pose using supervised learning. The ground truth egocentric projection is computed using geometric projections from ground truth depth. The Global and Local policies are both trained using Reinforcement Learning. The reward for the Global policy is the increase in coverage and the reward for the Local policy is the reduction in Euclidean distance to the short-term goal. All the modules are trained simultaneously. Their parameters are independent, but the data distribution is inter-dependent. Based on the actions taken by the Local policy, the future input to the Neural SLAM module changes, which in turn changes the map and agent pose input to the Global policy and consequently affects the short-term goal given to the Local Policy. For more architecture and hyperparameter details, please refer to the supplementary material and the open-source code. Baselines. We use a range of end-to-end Reinforcement Learning (RL) methods as baselines: RL + 3LConv: An RL policy with a 3-layer convolutional network followed by a GRU (Cho et al., 2014), as described by Savva et al. (2019), which is also identical to our Local Policy architecture. RL + Res18: An RL policy initialized with ResNet18 (He et al., 2016) pre-trained on ImageNet, followed by a GRU. RL + Res18 + AuxDepth: This baseline is adapted from Mirowski et al. (2017) who use depth prediction as an auxiliary task. We use the same architecture as our Neural SLAM module (conv layers from ResNet18) with one additional deconvolutional layer for depth prediction, followed by a 3-layer convolutional network and GRU for the policy. RL + Res18 + ProjDepth: This baseline is adapted from Chen et al. (2019), who project the depth image into an egocentric top-down view in addition to the RGB image as input to the RL policy. Since we do not have depth as input, we use the architecture from RL + Res18 + AuxDepth for depth prediction and project the predicted depth before passing it to the 3-layer Conv and GRU policy. For all the baselines, we also feed a 32-dimensional embedding of the sensor pose reading to the GRU along with the image-based representation. This embedding is also learnt end-to-end using RL. All baselines are trained using PPO (Schulman et al., 2017) with the increase in coverage as the reward. 6 RESULTS We train the proposed ANS model and all the baselines for the Exploration task with 10 million frames on the Gibson training set.
The results are shown in Table 1. The results on the Gibson Val set are averaged over a total of 994 episodes in 14 different unseen scenes. The proposed model achieves an average absolute and relative coverage of 31.379m2/0.924 as compared to 24.958m2/0.766 for the best baseline. This indicates that the proposed model is more efficient and effective at exhaustive exploration as compared to the baselines. This is because our hierarchical policy architecture reduces the horizon of the long-term exploration problem: instead of taking tens of low-level navigational actions, the Global policy only takes a few long-term goal actions. We also report the domain generalization performance on the Exploration task in Table 1 (see shaded region), where all models trained on Gibson are evaluated on the Matterport domain. ANS leads to higher domain generalization performance (57.228m2/0.405 vs 41.549m2/0.297). The absolute coverage is higher for the Matterport domain as it consists of larger scenes on average. Some visualizations of policy execution are provided in Figure 4. In Fig. 3, we plot the relative coverage (% Cov) of all the models as the episode progresses on the large and small scene sets, as well as the overall Gibson Val set. The plot on the small scene set shows that ANS is able to almost completely explore the small scenes in around 500 steps, whereas the baselines are only able to explore 85% of the small scenes in 1000 steps (see Fig. 3 center). This indicates that ANS explores more efficiently in small scenes. The plot on the large scene set shows that the performance gap between ANS and the baselines widens as the episode progresses (see Fig. 3 left). Looking at the behaviour of the baselines, we saw that they often got stuck in local areas. This behaviour indicates that they are unable to remember explored areas over long time horizons and are ineffective at long-term planning. On the other hand, ANS uses a Global policy on the map which allows it to have memory of explored areas over long time horizons, and to plan effectively to reach distant long-term goals by leveraging analytical planners. As a result, it is able to explore effectively in large scenes with long episode lengths. 6.1 ABLATIONS Local Policy. An alternative to learning a Local Policy is to have a deterministic policy which follows the plan given by the Planner. As shown in Table 2, the ANS model performs much worse without the Local Policy. The Local Policy is designed to adapt to small errors in Mapping. We observed the Local Policy overcoming both false positives and false negatives encountered in mapping. For example, the Neural SLAM module could sometimes wrongly predict a carpet as an obstacle. In this case, the planner would plan to go around the carpet. However, if the short-term goal is beyond the carpet, the Local Policy can understand that the carpet is not an obstacle based on the RGB observation and learn to walk over it. Similarly, we also observed cases where the Neural SLAM module didn’t predict small obstacles very close to the agent as they were not in the field-of-view due to the height of the camera. In this case, the planner would plan a path through the obstacle where the deterministic policy would get stuck. Since the Local Policy is recurrent, it learns to navigate around these obstacles by getting feedback from the environment. When the policy tries to move forward but cannot, it gets feedback that there must be an obstacle.
2See https://devendrachaplot.github.io/projects/Neural-SLAM for visualization videos. Global Policy. An alternative to learning a Global Policy for sampling long-term goals is to use a classical algorithm called Frontier-based exploration (Yamauchi, 1997). A frontier is defined as the boundary between the explored free space and the unexplored space. Frontier-based exploration essentially samples points on this frontier as goals to explore the space. There are different variants of Frontier-based exploration based on the sampling strategy. Holz et al. (2010) compare different sampling strategies and find that sampling the point on the frontier closest to the agent gives the best results empirically. We implement this variant and use it in place of our learned Global Policy. As shown in Table 2, the Frontier-based exploration policy performs worse than the Global Policy. We observed that Frontier-based exploration spent a lot of time exploring corners or small areas behind furniture. In contrast, the trained Global policy ignored small spaces and chose distant long-term goals, which led to exploring more area. Pose Estimation. A difference between ANS and the baselines is that ANS uses additional supervision to train the Pose Estimator. In order to understand whether the performance gain is coming from this additional supervision, we remove the Pose Estimator from ANS and just use the input sensor reading as our pose estimate. Results in Table 2 show that ANS still outperforms the baselines even without the Pose Estimator. Furthermore, passing the ground truth pose as input to the baselines instead of the sensor reading did not improve the performance of the baselines. 6.2 REAL-WORLD TRANSFER We deploy the trained ANS policy on a Locobot in the real world. In order to match the real-world observations to the simulator observations as closely as possible, we change the simulator input configuration to match the camera intrinsics on the Locobot. This includes the camera height and horizontal and vertical field-of-views. In Figure 5, we show an episode of ANS exploring the living area in an apartment. The figure shows that the policy transfers well to the real world and is able to effectively explore the environment. The long-term goals sampled by the Global policy (shown by blue circles on the map) are often towards open spaces in the explored map, which indicates that it is learning to exploit the structure in the map. Please refer to the project webpage for real-world transfer videos. 6.3 POINTGOAL TASK TRANSFER. PointGoal has been the most studied task in recent literature on navigation, where the objective is to navigate to a goal location whose relative coordinates are given as input in a limited time budget. In this task, each episode ends when either the agent takes the stop action or at a maximum of 500 timesteps. An episode is considered a success when the final position of the agent is within 0.2m of the goal location. In addition to Success rate (Succ), Success weighted by (normalized inverse) Path Length or SPL is also used as an evaluation metric, as proposed by Anderson et al. (2018). All the baseline models trained for the task of Exploration either need to be retrained or at least fine-tuned to be transferred to the Pointgoal task. The modularity of ANS provides another advantage: it can be transferred to the Pointgoal task without any additional training.
For transfer to the Pointgoal task, we just fix the Global policy to always output the PointGoal coordinates as the long-term goal and use the Local and Mapper trained for the Exploration task. We found that an ANS policy trained on exploration, when transferred to the Pointgoal task performed better than several RL and Imitation Learning baselines trained on the Pointgoal task. The transferred ANS model achieves a success rate/SPL of 0.950/0.846 as compared to 0.827/0.730 for the best baseline model on Gibson val set. The ANS model also generalized significantly better than the baselines to harder goals and to the Matterport domain. In addition to better performance, ANS was also 10 to 75 times more sample efficient than the baselines. This transferred ANS policy was also the winner of the CVPR 2019 Habitat Pointgoal Navigation Challenge for both RGB and RGB-D tracks among over 150 submissions from 16 teams. These results highlight a key advantage of our model that it allows us to transfer the knowledge of obstacle avoidance and control in low-level navigation across tasks, as the Local Policy and Mapper are task-invariant. More details about the Pointgoal experiments, baselines, results including domain and goal generalization on the Pointgoal task are provided in the supplementary material. 7 CONCLUSION In this paper, we proposed a modular navigational model which leverages the strengths of classical and learning-based navigational methods. We show that the proposed model outperforms prior methods on both Exploration and PointGoal tasks and shows strong generalization across domains, goals, and tasks. In future, the proposed model can be extended to complex semantic tasks such as Semantic Goal Navigation and Embodied Question Answering by using a semantic Neural SLAM module which creates multi-channel map capturing semantic properties of the objects in the environment. The model can also be combined with prior work on Localization to relocalize in a previously created map for efficient navigation in subsequent episodes. ACKNOWLEDGEMENTS This work was supported by IARPA DIVA D17PC00340, ONR Grant N000141812861, ONR MURI, ONR Young Investigator, DARPA MCS and Apple. We would also like to acknowledge NVIDIA’s GPU support. Licenses for referenced datasets. Gibson: http://svl.stanford.edu/gibson2/assets/GDS_agreement.pdf Matterport3D: http://kaldir.vc.in.tum.de/matterport/MP_TOS.pdf A POINTGOAL EXPERIMENTS PointGoal has been the most studied task in recent literature on navigation where the objective is to navigate to a goal location whose relative coordinates are given as input in a limited time budget. We follow the PointGoal task setup from Savva et al. (2019), using train/val/test splits for both Gibson and Matterport datasets. Note that the set of scenes used in each split is disjoint, which means the agent is tested on new scenes never seen during training. Gibson test set is not public but rather held out on an online evaluation server3. We report the performance of our model on the Gibson test set when submitted to the online server but also use the validation set as another test set for extensive comparison and analysis. We do not use the validation set for hyper-parameter tuning. Savva et al. (2019) identify two measures to quantify the difficulty of a PointGoal dataset. 
The first is the average geodesic distance (distance along the shortest path) to the goal location from the starting location of the agent, and the second is the average geodesic to Euclidean distance ratio (GED ratio). The GED ratio is always greater than or equal to 1, with higher ratio resulting in harder episodes. The train/val/test splits in Gibson dataset come from the same distribution of having similar average geodesic distance and GED ratio. In order to analyze the performance of the proposed model on out-of-set goal distribution, we create two harder sets, Hard-Dist and Hard-GEDR. In the Hard-Dist set, the geodesic distance to goal is always more than 10m and the average geodesic distance to the goal is 13.48m as compared to 6.9/6.5/7.0m in train/val/test splits (Savva et al., 2019). Hard-GEDR set consists of episodes with an average GED ratio of 2.52 and a minimum GED ratio of 2.0 as compared to average GED ratio 1.37 in the Gibson val set. We also follow the episode specification from Savva et al. (2019). Each episode ends when either the agent takes the stop action or at a maximum of 500 timesteps. An episode is considered a success when the final position of the agent is within 0.2m of the goal location. In addition to Success rate (Succ), we also use Success weighted by (normalized inverse) Path Length or SPL as a metric for evaluation for the PointGoal task as proposed by Anderson et al. (2018). A.1 POINTGOAL RESULTS In Table 3, we show the performance of the proposed model transferred to the PointGoal task along with the baselines trained on the PointGoal task with the same amount of data (10million frames). The proposed model achieves a success rate/SPL of 0.950/0.846 as compared to 0.827/0.730 for the best baseline model on Gibson val set. We also report the performance of the proposed model trained from scratch on the PointGoal task for 10 million frames. The results indicate that the performance of ANS transferred from Exploration is comparable to ANS trained on PointGoal. This highlights a key advantage of our model that it allows us to transfer the knowledge of obstacle avoidance and control in low-level navigation across tasks, as the Local Policy and Mapper are task-invariant. 3https://evalai.cloudcv.org/web/challenges/challenge-page/254 Sample efficiency. RL models are typically trained for more than 10 million samples. In order to compare the performance and sample-efficiency, we trained the best performing RL model (RL + Res18 + GRU + ProjDepth) for 75 million frames and it achieved a Succ/SPL of 0.678/0.486. ANS reaches the performance of 0.789/0.703 SPL/Succ at only 1 million frames. These numbers indicate that ANS achieves > 75× speedup as compared to the best RL baseline. Domain and Goal Generalization: In Table 3 (see shaded region), we evaluate all the baselines and ANS trained on the PointGoal task in the Gibson domain on the test set in Matterport domain as well as the harder goal sets in Gibson. We also transfer ANS trained on Exploration in Gibson on all the 3 sets. The results show that ANS outperforms all the baselines at all generalization sets. Interestingly, RL based methods almost fail completely on the Hard-Dist set. We also analyze the performance of the proposed model as compared to two best baselines CMP and IL + Res18 + GRU as a function of geodesic distance to goal and GED ratio in Figure 7. The performance of the baselines drops faster as compared to ANS, especially with increase in goal distance. 
This indicates that end-to-end learning methods are effective at short-term navigation but struggle when long-term planning is required to reach a distant goal. In Figure 8, we show some example trajectories of the ANS model along with the predicted map. The successful trajectories indicate that the model exhibits strong backtracking behavior, which makes it effective at distant goals requiring long-term planning. Figure 9 visualizes a trajectory in the PointGoal task, showing the first-person observations and the corresponding map predictions. Please refer to the project webpage for visualization videos. Habitat Challenge Results. We submitted the ANS model to the CVPR 2019 Habitat Pointgoal Navigation Challenge. The results are shown in Figure 6. ANS was submitted under code-name ‘Arnold’. ANS was the winning entry for both RGB and RGB-D tracks among over 150 submissions from 16 teams, achieving an SPL of 0.805 (RGB) and 0.948 (RGB-D) on the Test Challenge set. B NOISE MODEL IMPLEMENTATION DETAILS In order to implement the actuation and sensor noise models, we would like to collect data for navigational actions in the Habitat simulator. We use three default navigational actions: Forward: move forward by 25cm, Turn Right: on-the-spot rotation clockwise by 10 degrees, and Turn Left: on-the-spot rotation counter-clockwise by 10 degrees. The control commands are implemented as u_Forward = (0.25, 0, 0), u_Right = (0, 0, −10 ∗ π/180) and u_Left = (0, 0, 10 ∗ π/180). In practice, a robot can also rotate slightly while moving forward and translate a bit while rotating on-the-spot, creating rotational actuation noise in the forward action and, similarly, translational actuation noise in the on-the-spot rotation actions. We use a Locobot 4 to collect data for building the actuation and sensor noise models. We use the pyrobot API (Murali et al., 2019) along with ROS (Quigley et al., 2009) to implement the control commands and get sensor readings. In order to get an accurate agent pose, we use a Hokuyo UST-10LX Scanning Laser Rangefinder (LiDAR), which is very precise in our scenario as we take static readings in 2D (Kohlbrecher et al., 2011). We install the LiDAR on the Locobot by replacing the arm with the LiDAR. We note that the Hokuyo UST-10LX Scanning Laser Rangefinder is an expensive sensor. It costs $1600 as compared to the whole Locobot costing less than $2000 without the arm. Using expensive sensors can improve the performance of a model; however, for a method to be scalable, it should ideally work with cheaper sensors too. In order to demonstrate the scalability of our method, we use the LiDAR only to collect the data for building noise models and not for training or deploying navigation policies in the real world. For the sensor estimate, we use the Kobuki base odometry available in the Locobot. We approximate the LiDAR pose estimate to be the true pose of the agent as it is orders of magnitude more accurate than the base sensor. For each action, we collect 600 datapoints from both the base sensor and the LiDAR, making a total of 3600 datapoints (600 ∗ 3 ∗ 2). We use 500 datapoints for each action to fit the actuation and sensor noise models and use the remaining 100 datapoints for validation. For each action a, the LiDAR pose estimates give us samples p^i_1 and the base sensor readings give us samples p'^i_1, i = 1, 2, . . . , 600.
The difference between the LiDAR estimates (p^i_1) and the control command (Δu_a) gives us samples of the actuation noise for action a: ε^i_act,a = p^i_1 − Δu_a, and the difference between the base sensor readings and the LiDAR estimates gives us samples of the sensor noise: ε^i_sen,a = p'^i_1 − p^i_1. For each action a, we fit a separate Gaussian Mixture Model for the actuation noise and the sensor noise using the samples ε^i_act,a and ε^i_sen,a respectively, making a total of 6 models. We fit Gaussian mixture models with the number of components ranging from 1 to 20 and pick the model with the highest likelihood on the validation set. Each component in these Gaussian mixture models is a multi-variate Gaussian in 3 variables, x, y and o. We implement these actuation and sensor noise models in the Habitat simulator for our experiments. 4http://locobot.org C NEURAL SLAM MODULE IMPLEMENTATION DETAILS The Neural SLAM module (f_Map) takes in the current RGB observation, s_t ∈ R^{3×H×W}, the current and last sensor readings of the agent pose x'_{t−1:t}, and the map at the previous time step m_{t−1} ∈ R^{2×M×M}, and outputs an updated map, m_t ∈ R^{2×M×M} (see Figure 2): m_t, x̂_t = f_Map(s_t, x'_{t−1:t}, x̂_{t−1}, m_{t−1} | θ_M, b_{t−1}), where θ_M denotes the trainable parameters and b_{t−1} denotes the internal representations of the Neural SLAM module. The Neural SLAM module can be broken down into two parts, a Mapper (f_Pr) and a Pose Estimator Unit (f_PE). The Mapper outputs an egocentric top-down 2D spatial map, p^ego_t ∈ [0, 1]^{2×V×V} (where V is the vision range), predicting the obstacles and the explored area in the current observation: p^ego_t = f_Pr(s_t | θ_Pr), where θ_Pr are the parameters of the Mapper. It consists of ResNet18 convolutional layers to produce an embedding of the observation. This embedding is passed through two fully-connected layers followed by 3 deconvolutional layers to get the first-person top-down 2D spatial map prediction. Now, we would like to add the egocentric map prediction (p^ego_t) to the geocentric map from the previous time step (m_{t−1}). In order to transform the egocentric map to the geocentric frame, we need the pose of the agent in the geocentric frame. The sensor reading x'_t is typically noisy. Thus, we have a Pose Estimator to correct the sensor reading and give an accurate estimate of the agent's geocentric pose. In order to estimate the pose of the agent, we first calculate the relative pose change (dx) from the last time step using the sensor readings at the current and last time step (x'_{t−1}, x'_t). Then we use a Spatial Transformation (Jaderberg et al., 2015) on the egocentric map prediction at the last frame (p^ego_{t−1}) based on the relative pose change (dx): p'_{t−1} = f_ST(p^ego_{t−1} | dx). Note that the parameters of this Spatial Transformation are not learnt, but calculated using the pose change (dx). This transforms the projection at the last step to the current egocentric frame of reference. If the sensor were accurate, p'_{t−1} would highly overlap with p^ego_t. The Pose Estimator Unit takes in p'_{t−1} and p^ego_t as input and predicts the relative pose change: dx̂_t = f_PE(p'_{t−1}, p^ego_t | θ_PE). The intuition is that by looking at the egocentric predictions of the last two frames, the pose estimator can learn to predict the small translation and/or rotation that would align them better. The predicted relative pose change is then added to the last pose estimate to get the final pose estimate x̂_t = x̂_{t−1} + dx̂_t.
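A minimal PyTorch-style sketch of this pose-correction step may help make the data flow concrete. The module below is an illustrative assumption, not the exact architecture of the paper: it takes the aligned previous egocentric prediction and the current egocentric prediction (2 channels each) and regresses the 3-dimensional correction (dx, dy, do) that is added to the previous pose estimate.

```python
import torch
import torch.nn as nn

class PoseEstimatorSketch(nn.Module):
    """Illustrative pose estimator (layer sizes are assumptions): takes the
    spatially transformed previous egocentric prediction p'_{t-1} and the
    current egocentric prediction p^ego_t and regresses (dx, dy, do)."""
    def __init__(self, vision_range=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * (vision_range // 8) ** 2
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),  # predicted correction (dx, dy, d_orientation)
        )

    def forward(self, p_prev_aligned, p_ego):
        # concatenate the two 2-channel egocentric predictions -> (B, 4, V, V)
        x = torch.cat([p_prev_aligned, p_ego], dim=1)
        return self.head(self.conv(x))

# The predicted correction is then added to the previous pose estimate,
# i.e. x_hat_t = x_hat_{t-1} + dx_hat_t, as in the text above.
```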
Finally, the egocentric spatial map prediction is transformed to the geocentric frame with another Spatial Transformation, using the current pose prediction of the agent (x̂_t), and aggregated with the previous spatial map (m_{t−1}) using a channel-wise pooling operation: m_t = m_{t−1} + f_ST(p^ego_t | x̂_t). Combining all the functions and transformations: m_t = f_Map(s_t, x'_{t−1:t}, m_{t−1} | θ_M, b_{t−1}) = m_{t−1} + f_ST(p^ego_t | x'_t + f_PE(f_ST(p^ego_{t−1} | x'_{t−1:t}), f_Pr(s_t | θ_Pr) | θ_PE)), where θ_Pr, θ_PE ∈ θ_M, and p^ego_{t−1} ∈ b_{t−1}. D ARCHITECTURE DETAILS We use PyTorch (Paszke et al., 2017) for implementing and training our model. The Mapper in the Neural SLAM module consists of ResNet18 convolutional layers followed by 2 fully-connected layers trained with a dropout of 0.5, followed by 3 deconvolutional layers. The Pose Estimator consists of 3 convolutional layers followed by 2 fully-connected layers. The Global Policy is a 5-layer fully-convolutional network, while the Local Policy consists of a 3-layer convolutional network followed by a GRU. The Global and Local policies are both trained using Reinforcement Learning. The reward for the Global policy is the increase in coverage and the reward for the Local policy is the reduction in Euclidean distance to the short-term goal. Our PPO (Schulman et al., 2017) implementation of the Global and Local policies is based on Kostrikov (2018). In addition to the RGB observation, the Local policy receives the relative distance and angle to the short-term goal, the current timestep, and the last action as input. We bin the relative distance (bin size increasing with distance), relative angle (5 degree bins) and current timestep (30 time step bins) before passing them through embedding layers. This kind of discretization has been used previously for RL policies (Lample and Chaplot, 2017; Chaplot and Lample, 2017) and it improved the sample efficiency as compared to passing the continuous values as input directly. For a fair comparison, we use the same discretization for all the baselines as well. E HYPERPARAMETER DETAILS We train all the components with 72 parallel threads, with each thread using one of the 72 scenes in the Gibson training set. This leads to a batch size of 72 for training the Neural SLAM module. The Global policy samples a new goal every 25 timesteps. We use Proximal Policy Optimization (Schulman et al., 2017) for training the Global and Local policies with 72 parallel threads and a horizon length of 25 steps for the Local policy and 20 steps for the Global policy (20 steps for the Global policy is equivalent to 500 low-level timesteps as the Global policy samples a new goal after every 25 timesteps). We use the Adam optimizer with a learning rate of 0.0001 for training both units in the Neural SLAM module and Adam with a learning rate of 0.00025 for training the Global and Local policies. We use a discount factor of γ = 0.99, an entropy coefficient of 0.001, and a value loss coefficient of 0.5 for training both the Global and Local policies. The input frame size is 128 × 128, and the vision range for the SLAM module is V = 64, i.e. 3.2m (each cell is 5cm in length). Since there are no parameters dependent on the map size, it can be adaptive. We train with a map size of M = 960 (equivalent to 48m). A map of size 48m × 48m is large enough for all scenes in the Gibson val set. We use an adaptive map size for Pointgoal evaluation such that the goal lies within the central 50% of the map to handle even larger maps in the unseen test set. For the exploration task, we train and test with a constant M = 960.
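For reference, the training settings listed above can be collected into a single configuration. The sketch below simply copies the stated values into a dictionary; the key names are our own and not from the paper.

```python
# Hypothetical config dict collecting the hyperparameters stated above.
config = {
    "num_threads": 72,              # one Gibson training scene per thread
    "slam_batch_size": 72,
    "global_goal_interval": 25,     # Global policy samples a new goal every 25 steps
    "local_ppo_horizon": 25,        # PPO horizon for the Local policy (low-level steps)
    "global_ppo_horizon": 20,       # PPO horizon for the Global policy (= 500 low-level steps)
    "slam_lr": 1e-4,                # Adam, Neural SLAM module
    "policy_lr": 2.5e-4,            # Adam, Global and Local policies
    "gamma": 0.99,
    "entropy_coef": 0.001,
    "value_loss_coef": 0.5,
    "frame_size": (128, 128),
    "vision_range_cells": 64,       # 3.2 m at 5 cm per cell
    "map_size_cells": 960,          # 48 m x 48 m map
}
```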
For the Global policy in the Exploration task, the size of the Global Policy input is G = 240. F ADDITIONAL RESULTS
1. What is the focus of the paper in terms of machine learning and robotics? 2. What are the strengths of the proposed approach, particularly in combining classical methods with learning-based ones? 3. How does the reviewer assess the quality and effectiveness of the experiments conducted in the paper? 4. Are there any concerns or limitations regarding the applicability of the proposed method in real-world scenarios?
Review
Review The paper describes ANM, active neural mapping, to learn policies for efficiently exploring 3d environments. The paper combines classical methods with learning based approaches, allowing the final system to work competitively with raw sensory inputs without requiring unreasonable amounts of training samples. I think this is a well-written "ML-systems paper" and I'm especially happy that real-world aspects of mobile robots are taken into account. I was able to follow the overall idea of the approach as well as the description of the three components. I also think that the experiments are well done, showing convincingly ANMs competitive performance and demonstrate, through the ablation studies, the importance of its constituting parts.
ICLR
Title Backstepping Temporal Difference Learning Abstract Off-policy learning ability is an important feature of reinforcement learning (RL) for practical applications. However, even one of the most elementary RL algorithms, temporal-difference (TD) learning, is known to suffer from a divergence issue when the off-policy scheme is used together with linear function approximation. To overcome the divergent behavior, several off-policy TD-learning algorithms, including gradient-TD learning (GTD) and TD-learning with correction (TDC), have been developed. In this work, we provide a unified view of such algorithms from a purely control-theoretic perspective, and propose a new convergent algorithm. Our method relies on the backstepping technique, which is widely used in nonlinear control theory. Finally, convergence of the proposed algorithm is experimentally verified in environments where the standard TD-learning is known to be unstable. 1 INTRODUCTION Since Mnih et al. (2015), which demonstrated that deep reinforcement learning (RL) outperforms humans in several video games (Atari 2600 games), significant advances have been made in RL theory and algorithms. For instance, Van Hasselt et al. (2016); Lan et al. (2020); Chen et al. (2021) proposed variants of the so-called deep Q-network (Mnih et al., 2015) that achieve higher scores in Atari games than the original deep Q-network. An improved deep RL agent was developed in Badia et al. (2020) that performs better than the average human scores across 57 Atari games. Beyond performing well in video games, Schrittwieser et al. (2020) have also shown that an RL agent can self-learn chess, Go, and Shogi. Furthermore, RL has shown great success in real-world applications, e.g., robotics (Kober et al., 2013), healthcare (Gottesman et al., 2019), and recommendation systems (Chen et al., 2019). Despite the practical success of deep RL, there is still a gap between theory and practice. One of the notorious phenomena is the deadly triad (Sutton & Barto, 2018), the divergence issue of an algorithm when function approximation, off-policy learning, and bootstrapping are used together. One of the most fundamental algorithms, the so-called temporal-difference (TD) learning (Sutton, 1988), is known to diverge under the deadly triad, and several works have tried to fix this issue for decades. In particular, the seminal works Sutton et al. (2008; 2009) introduced the so-called GTD, gradient-TD2 (GTD2), and TDC, which are off-policy, and have been proved to be convergent with linear function approximation. More recently, Ghiassian et al. (2020) suggested a regularized version of TDC called TD learning with regularized correction (TDRC), and showed its favorable features under off-policy settings. Moreover, Lee et al. (2021) developed several variants of GTD based on a primal-dual formulation. On the other hand, backstepping control (Khalil, 2015) is a popular method for designing stable controllers for nonlinear systems with special structures. The design technique offers a wide range of stable controllers, and is proved to be robust under various settings. It has been used in various fields including quadrotor helicopters (Madani & Benallegue, 2006), mobile robots (Fierro & Lewis, 1997), and ship control (Fossen & Strand, 1999). Using the backstepping control technique, in this paper we develop a new convergent off-policy TD-learning algorithm which is single time-scale.
In particular, the goal of this paper is to introduce a new unifying framework to design off-policy TD-learning algorithms under linear function approximation. The main contributions are summarized as follows: • We propose a systematic way to generate off-policy TD-learning algorithms, including GTD2 and TDC, from a control-theoretic perspective. • Using our framework, we derive a new TD-learning algorithm, which we call backstepping TD (BTD). • We experimentally verify its convergence and performance under various settings, including settings where off-policy TD is known to be unstable. In particular, most of the previous works on off-policy TD-learning algorithms (e.g., GTD2 and TDC) are derived based on optimization perspectives starting with an objective function. Then, the convergence is proved by proving stability of the corresponding O.D.E. models. In this paper, we follow the reversed steps, and reveal that an off-policy TD-learning algorithm (called backstepping TD) can be derived based on control-theoretic motivations. In particular, we develop stable O.D.E. models first using the backstepping technique, and then recover back the corresponding off-policy TD-learning algorithms. The new analysis reveals connections between off-policy TD-learning and notions in control theory, and provides additional insights on off-policy TD-learning with simple concepts in control theory. This sound theoretical foundation established in this paper can potentially motivate further analysis and developments of new algorithms. Finally, we briefly summarize TD-learning algorithms that guarantee convergence under linear function approximation. GTD (Sutton et al., 2008), GTD2 and TDC (Sutton et al., 2009) have been developed to approximate the gradient of the mean squared projected Bellman error. Later, GTD and GTD2 have been discovered to solve a minimax optimization problem (Macua et al., 2014; Liu et al., 2020). Such a saddle-point viewpoint of GTD has led to many interesting results including Du et al. (2017); Dai et al. (2018); Lee et al. (2021). TDRC (Ghiassian et al., 2020) adds an additional term, similar to a regularization term, to one side of the parameter update, and tries to balance between the performance of TD and the stability of TDC. TDC++ (Ghiassian et al., 2020) also adds an additional regularization term on both sides of the parameter update. Even though TDRC shows good performance, it uses an additional parameter condition to ensure convergence, whereas TDC++ does not. 2 PRELIMINARIES 2.1 NONLINEAR SYSTEM THEORY Nonlinear system theory will play an important role throughout this paper. Here, we briefly review the basics of nonlinear systems. Let us consider the continuous-time nonlinear system ẋ_t = f(x_t, u_t), x_0 ∈ R^n, (1) where x_0 ∈ R^n is the initial state, t ∈ R, t ≥ 0 is the time, x_t ∈ R^n is the state, u_t ∈ R^n is the control input, and f : R^n × R^n → R^n is a nonlinear mapping. An important concept in dealing with nonlinear systems is the equilibrium point. Considering the state-feedback law u_t = µ(x_t), the system can be written as ẋ_t = f(x_t, u_t) = f(x_t, µ(x_t)) =: f(x_t), and a point x = x_e in the state space is said to be an equilibrium point of (1) if it has the property that whenever the state of the system starts at x_e, it will remain at x_e (Khalil, 2015). For ẋ_t = f(x_t), the equilibrium points are the real roots of the equation f(x) = 0. The equilibrium point x_e is said to be globally asymptotically stable if for any initial state x_0 ∈ R^n, x_t → x_e as t → ∞.
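As a toy illustration of these notions (our own example, not taken from the paper), consider the scalar autonomous system ẋ_t = −x_t^3: its only equilibrium point is the origin, and a simple forward-Euler simulation shows trajectories from different initial states converging to it, consistent with global asymptotic stability.

```python
def f(x):
    # x_dot = f(x) = -x**3; the only real root of f(x) = 0 is x = 0,
    # so the origin is the unique equilibrium point of this system
    return -x ** 3

dt, num_steps = 0.01, 20000
for x0 in [-2.0, -0.5, 0.7, 2.0]:
    x = x0
    for _ in range(num_steps):
        x = x + dt * f(x)  # forward-Euler integration of the O.D.E.
    print(f"x0 = {x0:5.2f}  ->  x(T) = {x:.6f}")
# every trajectory approaches the origin, as global asymptotic stability predicts
```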
An important control design problem is to construct a state-feedback law ut = µ(xt) such that the origin becomes the globally asymptotically stable equilibrium point of (1). To design a statefeedback law to meet such a goal, control Lyapunov function plays a central role, which is defined in the following definition. Definition 2.1 (Control Lyapunov function (Sontag, 2013)). A positive definite function V : Rn ! R is called a control Lyapunov function (CLF) if for all x 6= 0, there exists a corresponding control input u 2 Rm that satisfies the inequality, rxV (x)>f(x, u) < 0 for all x 6= 0. Once such a CLF is found, then it guarantees that there exists the control law that stabilizes the system. Moreover, the corresponding state-feedback control law can be extracted from the CLF, e.g., µ(x) = argminu rxV (x)>f(x, u) provided that the minimum exists and unique. The concept of control Lyapunov function will be used in the derivations of our main results. For the autonomous system, ẋt = f(xt), and Lypaunov function V : Rn ! R, Lie derivative is defined as LfV (x) := rxV (x)>f(x) so that V̇ (xt) = LfV (xt) along the solution. 2.2 STOCHASTIC APPROXIMATION AND O.D.E. APPROACH Including Q-learning (Watkins & Dayan, 1992) and TD-learning (Sutton, 1988), reinforcement learning algorithms can be considered as stochastic approximation (Robbins & Monro, 1951) described by xk+1 = xk + ↵k(f(xk) + ✏k), (2) where f : Rn ! Rn is a nonlinear mapping, and ✏k is an i.i.d. noise. Borkar and Meyn theorem (Borkar & Meyn, 2000) is a well-known method to bridge the asymptotic convergence of stochastic approximation and the stability of its corresponding O.D.E. model, which can be expressed as ẋt = f(xt), x0 2 Rn, (3) where x0 2 Rn is initial state, and t 2 R, t 0 is the time. Borkar and Meyn theorem (Borkar & Meyn, 2000) states that under the conditions in Assumption 7.1 in the Appendix, global asymptotic stability of the O.D.E. (3) leads to asymptotic convergence of the stochastic approximation update (2), which is formally stated in the following lemma. Lemma 2.1 (Borkar and Meyn theorem (Borkar & Meyn, 2000)). Suppose that Assumption 7.1 in the Appendix holds, and consider the stochastic approximation in (2). Then, for any initial x0 2 Rn, supk 0 ||xk|| < 1 with probability one. In addition , xk ! xe as k ! 1 with probability one, where xe is the unique equilibrium point of the O.D.E. in (3). The main idea of Borkar and Meyn theorem is as follows: iterations of a stochastic recursive algorithm follow the solution of its corresponding O.D.E. in the limit when the step-size satisfies the so-called Robbins-Monro condition (Robbins & Monro, 1951) in (33) in the Appendix. Therefore, by proving asymptotic stability of the O.D.E., we can induce convergence of the original algorithm. In this paper, we will use an O.D.E. model of TD-learning, which is expressed as a linear timeinvariant system. 2.3 BACKSTEPPING CONTROL This section provides the concept of the backstepping control (Kokotovic, 1992; Khalil, 2015), which will be the main tool in this paper to derive TD-learning algorithms. The backstepping technique is a popular tool for generating a CLF (control Lyapunov function) for nonlinear systems with specific structures. In particular, let us start with the following general nonlinear system: ẏt = f(yt) + g(yt)xt (4) ẋt = ut, where yt 2 Rm, xt 2 Rm are the states, ut 2 Rm is the input, and f : Rm ! Rm and g : Rm ! R are continuous functions. 
The first system is a nonlinear system with a particular affine structure, and the second system is simply an integrator. It can be seen as a cascade interconnection of two systems, where the second system’s state is injected to the input of the first system. The backstepping control technique gives us a systematic way to generate a CLF for such particular nonlinear systems provided that the first system admits a CLF independently. To this end, we suppose that the first system admits a CLF. Through the backstepping approach, designing a stable control law for the above system can be summarized in the following steps: Step 1. Consider xt in (4) as virtual input x̃(yt) (state-feedback controller), and consider the following system: ̇t = f(yt) + g(yt)x̃(yt). Design x̃(yt) such that the above system admits a CLF V , i.e., it admits a positive definite and radially unbounded function V such that its time derivative is negative definite, i.e.,V̇ (yt) < 0, 8yt 6= 0. Step 2. Denote the error between the virtual state-feedback controller x̃(yt) and state variable xt as zt := xt x̃(yt). Now, rewrite the original O.D.E. in (4) with the new variable (yt, zt): d dt yt zt = f(yt) + g(yt)x̃(yt) + g(yt)zt ut ˙̃x(yt) Step 3. Design the control input ut such that the above system is stable. One popular choice is to consider the CLF Vc(yt, zt) := V (yt) + ||zt||2/2, where V (yt) is defined in Step 1. Then choose ut such that the time derivative of Vc(yt, zt) to be negative definite. A simple example of designing stabilizing control law by backstepping technique is given in Appendix Section 7.3. 2.4 MARKOV DECISION PROCESS In this paper, we consider a Markov decision process (MDP) characterized by the tuple (S,A,P, , r), where S := {1, 2, . . . , |S|} stands for the set of finite state space, |S| denotes the size of S , A := {1, 2, . . . , |A|} denotes the set of finite action space, |A| is the size of A, 2 (0, 1) is the discount factor, P : S ⇥ A ⇥ S ! [0, 1] denotes the Markov transition kernel, and r : S ⇥A ⇥ S ! R means the reward function. In particular, if an agent at state s 2 S , takes action a 2 A, then the current state transits to the next state s0 2 S with probability P(s, a, s0), and the agent receives reward r(s, a, s0). Each element of the state to state transition matrix under policy ⇡, denoted by P⇡ 2 R|S|⇥|S| is [P⇡]ij := P a2A ⇡(a|i)P(i, a, j), 1 i, j |S|, where [P⇡]ij corresponds to i-th row and j-th column element of matrix P⇡ . Moreover, the stationary state distribution induced by policy µ, is denoted as dµ : S ! [0, 1], i.e., dµ>Pµ = dµ>. With the above setup, we define the following matrix notations: Dµ := 2 64 dµ(1) . . . dµ(|S|) 3 75 2 R|S|⇥|S|, R⇡ = 2 664 Ea⇠⇡[r(s, a, s0)|s = 1] Ea⇠⇡[r(s, a, s0)|s = 2] ... Ea⇠⇡[r(s, a, s0)|s = |S|] 3 775 2 R |S|, where Dµ is a diagonal matrix of the state distribution induced by behavior policy µ, each element of R⇡ is the expected reward under policy ⇡ at the corresponding state. The policy evaluation problem aims to approximate the value function at state s 2 S , v⇡(s) := E ⇥P1 k=0 kr(Sk, Ak, Sk+1) S0 = s,⇡ ⇤ , where the trajectory is generated under policy ⇡ : S ⇥ A ! [0, 1]. In this paper, we consider the linear function approximation to approximate the value function v⇡(s). In particular, we parameterize the value function v⇡(s) with >(s)⇠, where : S ! Rn is a pre-selected feature vector with (s) := [ 1(s) · · · n(s)], 1, . . . , n : S ! R are feature functions, and ⇠ 2 Rn is the learning parameter. 
The goal of the policy evaluation problem is then to approximate the value function v^π(s) using this linear parameterization, i.e., φ^⊤(s)ξ ≈ v^π(s). Moreover, using the matrix notation Φ := [φ(1), φ(2), · · · , φ(|S|)]^⊤ ∈ R^{|S|×n}, called the feature matrix, the linear parameterization can be written in the vector form Φξ. We also assume that Φ is a full column rank matrix throughout the paper, which is a standard assumption (Sutton et al., 2008; 2009; Ghiassian et al., 2020; Lee et al., 2021). 2.5 TEMPORAL DIFFERENCE LEARNING This section provides a brief background on TD-learning (Sutton, 1988). Suppose that we have access to stochastic samples of the state s_k from the stationary state distribution induced by the behavior policy µ, i.e., s_k ∼ d^µ(·), and the action is chosen under the behavior policy µ, i.e., a_k ∼ µ(·|s_k). Then, we observe the next state s'_k following s'_k ∼ P(·, a_k, s_k), and receive the reward r_k := r(s_k, a_k, s'_k). Using the simplified notations for the feature vectors, φ_k := φ(s_k), φ'_k := φ(s'_k), the TD-learning update at time step k with linear function approximation can be expressed as ξ_{k+1} = ξ_k + α_k ρ_k δ_k(ξ_k) φ_k, where α_k > 0 is the step-size, δ_k(ξ_k) := r_k + γ φ'^⊤_k ξ_k − φ^⊤_k ξ_k is called the temporal difference or temporal difference error (TD-error), and ρ_k := ρ(s_k, a_k) = π(a_k|s_k)/µ(a_k|s_k) is called the importance sampling ratio (Precup et al., 2001). The importance sampling ratio reweights the TD-error to handle the mismatch between the behavior policy µ and the target policy π. It is known that TD-learning with linear function approximation and an off-policy learning scheme does not guarantee convergence in general. The above stochastic approximation aims to find the fixed point of the following projected Bellman equation, which is, after some manipulations, expressed as: Φ^⊤D^µΦ ξ* − γ Φ^⊤D^µP^πΦ ξ* = Φ^⊤D^µR^π. (5) To simplify the expressions, let us introduce some more notation: A := E_{s∼dµ(s), s'∼P^π(s'|s)}[φ(s)(φ(s) − γφ(s'))^⊤] = Φ^⊤D^µΦ − γΦ^⊤D^µP^πΦ ∈ R^{n×n}, b := E_{s∼dµ(s), a∼π(a|s), s'∼P(s'|s,a)}[r(s, a, s')φ(s)] = Φ^⊤D^µR^π ∈ R^{n×1}. Even though we can use an arbitrary distribution, for simplicity we assume the stationary distribution of µ. Now, we can rewrite (5) compactly as Aξ* = b. (6) The corresponding O.D.E. for TD-learning can be written as ξ̇_t = −Aξ_t + b, ξ_0 ∈ R^n. Using the coordinate transform x_k := ξ_k − ξ*, we get the O.D.E. ẋ_t = −Ax_t, x_0 ∈ R^n, whose origin is a globally asymptotically stable equilibrium point if ρ(s, a) = π(a|s)/µ(a|s) = 1 for all (s, a) ∈ S × A. Throughout the paper we will use the vector x_k := ξ_k − ξ* to represent the coordinate transform of ξ_k to the origin, and will use ξ_t and x_t to denote the corresponding continuous-time counterparts of ξ_k and x_k, respectively. 2.6 GRADIENT TEMPORAL DIFFERENCE LEARNING To fix the instability issue of off-policy TD-learning under linear function approximation, Sutton et al. (2008) and Sutton et al. (2009) introduced various stable off-policy TD-learning algorithms, called GTD (gradient TD-learning), GTD2, and TDC (temporal difference correction). The idea behind these algorithms is to minimize the mean-square error of the projected Bellman equation (MSPBE), min_{ξ∈R^n} (1/2)||Φ^⊤D^µ(R^π + γP^πΦξ − Φξ)||^2_{(Φ^⊤D^µΦ)^{−1}}, where ||x||_D := sqrt(x^⊤Dx), and the global minimizer of the MSPBE corresponds to the solution of (6). The core idea of these algorithms is to introduce an additional variable λ_k ∈ R^n to approximate the stochastic gradient descent method for the MSPBE as an objective function. In particular, the GTD2 update can be written as λ_{k+1} = λ_k + α_k(−φ^⊤_k λ_k + ρ_k δ_k(ξ_k))φ_k, ξ_{k+1} = ξ_k + α_k(φ^⊤_k λ_k φ_k − γρ_k φ^⊤_k λ_k φ'_k).
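The following NumPy sketch shows one sample update of off-policy TD(0) and of GTD2, following the updates written above; the surrounding sampling loop, feature map, and step-size schedule are omitted and assumed given.

```python
import numpy as np

def td_and_gtd2_step(xi, lam, phi, phi_next, r, rho, gamma, alpha):
    """One sample update of off-policy TD(0) and of GTD2.
    xi: main parameter, lam: auxiliary GTD2 variable,
    phi / phi_next: feature vectors of s_k and s'_k, rho: importance sampling ratio."""
    delta = r + gamma * phi_next @ xi - phi @ xi   # TD error delta_k(xi_k)

    # plain off-policy TD(0): xi <- xi + alpha * rho * delta_k * phi_k
    xi_td = xi + alpha * rho * delta * phi

    # GTD2: auxiliary update and main update, as written in the text above
    a = phi @ lam                                  # phi_k^T lambda_k
    lam_gtd2 = lam + alpha * (-a + rho * delta) * phi
    xi_gtd2 = xi + alpha * (a * phi - gamma * rho * a * phi_next)
    return xi_td, xi_gtd2, lam_gtd2

# example call with random features (placeholders for a real transition)
rng = np.random.default_rng(0)
xi, lam = np.zeros(5), np.zeros(5)
phi, phi_next = rng.normal(size=5), rng.normal(size=5)
xi_td, xi_gtd2, lam_gtd2 = td_and_gtd2_step(xi, lam, phi, phi_next,
                                            r=1.0, rho=1.2, gamma=0.99, alpha=0.01)
```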
We use λ_t to denote the continuous-time counterpart of λ_k. Since the fixed point for λ_k is zero, it does not require a coordinate transformation. It is a single time-scale algorithm because it uses a single step-size α_k. The corresponding O.D.E. is expressed as λ̇_t = −Cλ_t − Ax_t, ẋ_t = A^⊤λ_t, where C := E_{s∼dµ(s)}[φ(s)φ^⊤(s)] = Φ^⊤D^µΦ ∈ R^{n×n}. Similarly, the TDC update can be written as λ_{k+1} = λ_k + α_k(−φ^⊤_k λ_k + ρ_k δ_k(ξ_k))φ_k (7) ξ_{k+1} = ξ_k + β_k(−γρ_k φ^⊤_k λ_k φ'_k + ρ_k δ_k(ξ_k)φ_k), (8) where the step-sizes α_k and β_k satisfy α_k/β_k → 0 as k → ∞ and the Robbins and Monro step-size condition (Robbins & Monro, 1951) in (33) in the Appendix. It is a two time-scale algorithm because it uses two step-sizes, α_k and β_k. 3 DESIGNING TD-LEARNING THROUGH BACKSTEPPING We briefly explain the motivation for our algorithmic development. The Borkar and Meyn theorem (Borkar & Meyn, 2000) in Lemma 2.1 is a typical tool to prove convergence of Q-learning (Borkar & Meyn, 2000; Lee & He, 2019) and TD-learning (Sutton et al., 2009; Lee et al., 2021). Most of the previous works on off-policy TD-learning algorithms (e.g., GTD2 and TDC) first start with an objective function, and then derive GTD algorithms based on optimization perspectives. Then, the convergence is proved using the corresponding O.D.E. models and the stability theory of linear time-invariant systems. A natural question that arises is: can we derive off-policy TD-learning algorithms following the reversed steps? In other words, can we develop a stable O.D.E. model first using tools in control theory, and then recover back the corresponding off-policy TD-learning algorithms? In this paper, we reveal that a class of off-policy TD-learning algorithms can be derived based on purely control-theoretic motivations following such a reversed process. By doing so, this work provides additional insights on off-policy TD-learning algorithms and gives a sound theoretical foundation on off-policy TD-learning algorithms for further developments of new algorithms. Designing stabilizing control laws for continuous-time nonlinear systems has been successful over the past decades (Khalil, 2015). One such technique, the so-called backstepping, is a popular controller design method in the nonlinear control literature (Khalil, 2015). With the help of the backstepping method (Khalil, 2015), we design stabilizing control laws for continuous-time systems, and then the corresponding off-policy TD-learning algorithms are derived, and are shown to be convergent via the Borkar and Meyn theorem (Borkar & Meyn, 2000) in Lemma 2.1. The brief procedure is explained in the following steps: Step 1) Choose an appropriate continuous-time dynamic model such that (a) we can recover the TD fixed point ξ* in (6) via its equilibrium point; (b) the corresponding stochastic approximation algorithm is implementable using only transitions of the MDP and accessible data. Step 2) Using the backstepping method, design a control input to stabilize the dynamic model chosen in Step 1). 3.1 BACKSTEPPING TD Now, we introduce a new off-policy TD-learning algorithm, which we call Backstepping TD (BTD). First, we will develop a stabilizing control law for the following continuous-time system: λ̇_t = (−C + ηA)λ_t − Ax_t (9) ẋ_t = u_t (10) The idea stems from finding a control system to which we can easily apply the backstepping technique. In detail, the backstepping technique can be applied to two interconnected systems where one subsystem, namely (4), can be stabilized with x_t in (4) as a control input. Therefore, our first aim is to find such a system.
To this end, we can try a natural choice of O.D.E. to solve the TD problem, i.e., λ̇_t = −Aλ_t, which is, however, unstable in the off-policy case. Therefore, we can develop a modified O.D.E. λ̇_t = (−C + ηA)λ_t − Ax_t, where x_t is the control input, the negative definite matrix −C is introduced to stabilize the system, and η > 0 is introduced to provide additional degrees of freedom in the design. Now, the constructed system can be stabilized through the state-feedback controller x_t = ηλ_t and admits the simple control Lyapunov function V(λ) = ||λ||²_2. Moreover, A should be included in the right-hand side in order to implement the corresponding algorithm without knowing the solution, because x_k = ξ_k − ξ* and ξ* should be removed using Aξ* = b in the final step. Simply setting x_t = ηλ_t would cancel out A in the right-hand side and the O.D.E. would become λ̇_t = −Cλ_t. Therefore, as mentioned before, we can apply the backstepping technique by adding an additional dynamic controller. As the next step, the backstepping technique is applied, and one needs to observe what the final form of the control system would be. In summary, if we construct f(λ_t) from a combination of A and C (not necessarily C; it may be I), it can be a reasonable candidate for applying the backstepping technique. Cancelling A with the virtual input only leaves −C, which guarantees stability from its negative definiteness. Therefore, (9) and (10) are a reasonable candidate for the dynamics to which we can apply the backstepping technique. In particular, our aim is to design an appropriate control input u_t for the above system such that the origin is the unique asymptotically stable equilibrium point, i.e., (λ_t, x_t) → 0 as t → ∞ for any (λ_0, x_0) ∈ R^n × R^n. The overall procedure is depicted in Figure 1 in the Appendix, and we show how to choose the control input u_t in the following lemma. Lemma 3.1. Consider the O.D.E. in (9) and (10). If we choose the control input u_t := (A^⊤ + η²A − ηC)λ_t − ηAx_t, then the above O.D.E. has a globally asymptotically stable origin, i.e., (λ_t, x_t) → (0, 0) as t → ∞ for any (λ_0, x_0) ∈ R^n × R^n. Proof sketch. The proof follows the steps given in the backstepping scheme in Section 3. First, substituting x_t in (9) with a virtual controller x̃(λ_t), we will design a control law x̃(λ_t) that stabilizes the following new virtual system: λ̇_t = (−C + ηA)λ_t − Ax̃(λ_t). (11) One natural choice of the virtual controller is x̃(λ_t) = ηλ_t. Plugging it into (11) leads to λ̇_t = −Cλ_t, and we can verify the global asymptotic stability of the above system with the following Lyapunov function: V(λ_t) := ||λ_t||²_2 / 2. (12) We now consider the original O.D.E. in (9) and (10). Applying simple algebraic manipulations yields λ̇_t = −Cλ_t − A(x_t − ηλ_t), ẋ_t = u_t. The error between x_t and the virtual controller x̃(λ_t) can be expressed as a new variable z_t, which is z_t := x_t − x̃(λ_t) = x_t − ηλ_t. Rewriting the O.D.E. in (9) and (10) in the (λ_t, z_t) coordinates, we have λ̇_t = −Cλ_t − Az_t (13) ż_t = u_t + ηCλ_t + ηAz_t. To prove the global asymptotic stability of the above system, consider the function V_c(λ_t, z_t) := V(λ_t) + ||z_t||²_2 / 2, where V(λ_t) is defined in (12). By taking u_t as u_t = A^⊤λ_t − ηCλ_t − ηAz_t, we can apply LaSalle's invariance principle in Lemma 7.1. The full proof is in Appendix Section 7.4.1. Using the relation z_t := x_t − ηλ_t, the control input in the original coordinates (λ_t, x_t) can be written as u_t := A^⊤λ_t − ηCλ_t − ηAz_t = (A^⊤ + η²A − ηC)λ_t − ηAx_t.
Plugging this input into the original open-loop system in (9) and (10), the closed-loop system in the original coordinates (λ_t, x_t) can be written as λ̇_t = (−C + ηA)λ_t − Ax_t (14) ẋ_t = (A^⊤ + η²A − ηC)λ_t − ηAx_t, (15) whose origin is also globally asymptotically stable according to Lemma 3.1. Recovering back from x_t to ξ_t, we have d/dt [λ_t; ξ_t] = [−C + ηA, −A; A^⊤ + η²A − ηC, −ηA][λ_t; ξ_t] + [b; ηb]. The corresponding stochastic approximation of the O.D.E. in Theorem 3.1 becomes λ_{k+1} = λ_k + α_k(((−1 + η)φ^⊤_k − γηρ_k φ'^⊤_k)λ_k + ρ_k δ_k(ξ_k))φ_k (16) ξ_{k+1} = ξ_k + α_k(((−η + η²)φ^⊤_k − γη²ρ_k φ'^⊤_k)λ_k φ_k + ηρ_k δ_k(ξ_k)φ_k + (φ^⊤_k λ_k φ_k − γρ_k φ^⊤_k λ_k φ'_k)). (17) The equilibrium point of the above O.D.E. is (0, ξ*). Hence, we only need to transform the coordinate of ξ_t to x_t = ξ_t − ξ*, which results in the O.D.E. in (14) and (15). With the above result, we are now ready to prove convergence of Algorithm 1. The proof simply follows from the Borkar and Meyn theorem in Lemma 2.1, of which the details can be found in Sutton et al. (2009). Theorem 3.1. Under the step-size condition (33), with Algorithm 1 in the Appendix, ξ_k → ξ* as k → ∞ with probability one, where ξ* is the fixed point of (6). Proof. The proof is done by checking Assumption 7.1 in the Appendix. Remark 3.1. Theorem 3.1 does not require any condition on η. Therefore, we can set η = 0, which reduces to GTD2 developed in Sutton et al. (2009). 3.2 RECOVERING SINGLE TIME-SCALE TDC In this section, we derive a single time-scale version of TDC (Sutton et al., 2009) through the backstepping design in the previous section. TDC was originally developed as a two time-scale algorithm in Sutton et al. (2009). Even though the two time-scale method provides theoretical guarantees for a larger class of algorithms, the single time-scale scheme provides more simplicity in practice, and shows faster convergence empirically. Subsequently, Maei (2011) provided a single time-scale version of TDC by multiplying the faster time-scale part (7) by a large enough constant η > 0, which leads to λ_{k+1} = λ_k + β_k η(−φ^⊤_k λ_k + ρ_k δ_k(ξ_k))φ_k (18) ξ_{k+1} = ξ_k + β_k(−γρ_k φ^⊤_k λ_k φ'_k + ρ_k δ_k(ξ_k)φ_k), (19) where η > max{0, −λ_min(C^{−1}(A + A^⊤)/2)}. (20) Here, we derive another version of single time-scale TDC by multiplying the slower time-scale part in (8) by a constant β, which results in λ_{k+1} = λ_k + α_k(−φ^⊤_k λ_k + ρ_k δ_k(ξ_k))φ_k (21) ξ_{k+1} = ξ_k + α_k β(φ^⊤_k λ_k φ_k − γρ_k φ^⊤_k λ_k φ'_k + ρ_k δ_k(ξ_k)φ_k), (22) where β satisfies 0 < β < −λ_min(C)/λ_min(A) if λ_min(A) < 0, and β > 0 otherwise. (23) We can derive the above algorithm following similar steps as in Section 3.1. Let us first consider the following dynamic model: λ̇_t = −Cλ_t − Ax_t (24) ẋ_t = u_t (25) Using the backstepping technique, we can prove that the above system admits the origin as a globally asymptotically stable equilibrium point with the control input u_t := β(A^⊤ − C)λ_t − βAx_t, which is shown in the following lemma: Lemma 3.2. Consider the O.D.E. in (24) and (25). Suppose that we choose the control input u_t := β(A^⊤ − C)λ_t − βAx_t, and β satisfies condition (23). Then, the above O.D.E. has a globally asymptotically stable origin, i.e., (λ_t, x_t) → (0, 0) as t → ∞. The proof of Lemma 3.2 is given in Appendix Section 7.4.2. By the Borkar and Meyn theorem in Lemma 2.1, we can readily prove the convergence of Algorithm 2 in the Appendix, which uses the stochastic recursive updates (21) and (22). Theorem 3.2. Consider Algorithm 2 in the Appendix. Under the step-size condition (33), and if β satisfies (23), ξ_k → ξ* as k → ∞ with probability one, where ξ* is the fixed point of (6). We will call Algorithm 4 TDC-slow, and the single time-scale version of TDC suggested by Maei (2011) TDC-fast.
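The BTD update is straightforward to implement once samples (φ_k, φ'_k, r_k, ρ_k) are available. Below is a minimal NumPy sketch of a single BTD step, following the form of (16)-(17) as written above; the sampling loop, feature map, and step-size schedule are omitted and assumed given, and setting eta = 0 recovers the GTD2 step from the earlier sketch, consistent with Remark 3.1.

```python
import numpy as np

def btd_step(xi, lam, phi, phi_next, r, rho, gamma, alpha, eta):
    """One sample update of backstepping TD (BTD), following Eqs. (16)-(17) above."""
    delta = r + gamma * phi_next @ xi - phi @ xi   # TD error delta_k(xi_k)
    a = phi @ lam                                  # phi_k^T lambda_k
    b = phi_next @ lam                             # phi'_k^T lambda_k

    # Eq. (16): auxiliary variable update
    lam_new = lam + alpha * (((-1.0 + eta) * a - gamma * eta * rho * b)
                             + rho * delta) * phi
    # Eq. (17): main parameter update
    xi_new = xi + alpha * (((-eta + eta ** 2) * a - gamma * eta ** 2 * rho * b) * phi
                           + eta * rho * delta * phi
                           + (a * phi - gamma * rho * a * phi_next))
    return xi_new, lam_new
```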
Other than the multiplication of a constant reflecting two-time scale property, we can make TDC into a single-time algorithm, which we call a single time-scale TDC2, while the original version in Maei (2011) will be called the single time-scale TDC. The derivation is given in Appendix Section 7.5. The performance of such versions of TDC are evaluated in Appendix Section 7.9.1. Even though not one of the algorithms outperforms each other, TDC-slow and TDC2 shows better performance in general. 3.3 GENERALIZING TDC++ This section provides versions of TDC++ (Ghiassian et al., 2020), which is variant of TDC. With an additional regularization term ⇠k on both updates of TDC in (7) and (8), the update is written as follows: k+1 = k + ↵k⌘( >k k + ⇢k k(⇠k)) k k) (26) ⇠k+1 = ⇠k + ↵k( ⇢k >k k 0k k + ⇢k k(⇠k) k), (27) where ⌘ > 0 satisfies (20) and > 0 is a new parameter. Note that TDC++ can be simply viewed as variant of TDC by adding the term k in the update, which can be seen as a regularization term. Therefore, letting = 0 yields the original TDC. In this paper, we prove that our controller design leads to the following update: k+1 = k + ↵k⌘( >k k + ⇢k k(⇠k)) k k) (28) ⇠k+1 = ⇠k + ↵k( ⇢k >k k 0k + (1 ⌘) >k k k ⌘ k + ⇢k⌘ k(⇠k) k), (29) where and are new parameters and when = 1/⌘ it becomes TDC++. The difference with the original TDC++ can be seen in their corresponding O.D.E. forms. The corresponding O.D.E. for (26) and (27) (original TDC++) can be expressed as: ddt t xt = ⌘(C + I) ⌘A A> C I A t xt . Meanwhile, the O.D.E. corresponding to (28) and (29) (new TDC++) becomes ddt t xt = ⌘(C + I) ⌘A A> ⌘(C + I) ⌘A t xt . We experiment under different of and ⌘ to examine the behavior of new TDC++. The result shows that in general, smaller leads to better performance. The results are given in Appendix Section 7.9. Lemma 3.3. Consider the following O.D.E.: ̇t = ⌘(C + I) t ⌘Axt (30) ẋt = ut. (31) Suppose that we choose the control input ut := (A> ⌘(C + I)) t ⌘Axt. Assume ⌘ > 0 and and satisfies the following condition: + min(A) > min(C). Then, the above O.D.E. has globally asymptotically stable origin, i.e., ( t, xt) ! (0, 0) as t ! 1. The proof is given in Appendix Section 7.4.3. With Lemma 2.1, we can prove the convergence of stochastic update with (28) and (29) whose pseudo code is given in Algorithm 5 in Appendix. Theorem 3.3. Consider Algorithm 5 in Appendix. Under the step-size condition (33) and if ⌘ satisfies (20), then ⇠k ! ⇠⇤ as k ! 1 with probability one, where ⇠⇤ is the TD fixed point in (6). Remark 3.2. We can replace the regularization term with nonlinear terms satisfying certain condi- tions. The details are given in Appendix Section 7.6. 4 EXPERIMENTS We verify the performance and convergence of the proposed BTD under standard benchmarks to evaluate off-policy TD-learning algorithms, including Baird environment (Baird, 1995), RandomWalk (Sutton et al., 2009) with different features, and Boyan chain (Boyan, 2002). The details about the environments are given in Appendix Section 7.7. From the experiments, we see how BTD behaves under different coefficients ⌘ 2 { 0.5, 0.25, 0, 0.25, 0.5}. We measure the Root Mean-Squared Projected Bellman Error (RMSPBE) as the performance metric, and every results are averaged over 100 runs. From Table 1, the result with ⌘ = 0.5 shows the best performance except at Baird, where ⌘ = 0, corresponding to GTD2 performs best. There exist two aspects on the role of ⌘. 
First of all, it can be thought of as a parameter that can mitigate the effect of instability coming from matrix A in (9). For example, a smaller ⌘ can stabilize the system. However, as a trade off, if ⌘ is too small, then the update rate might be too small as well. As a result, the overall convergence can be slower. Furthermore, ⌘ also controls the effect of C in (13) in the BTD update rules, where C corresponds to ( ⌘ + ⌘2) >k k k in (17). Note that the role of ⌘ in the final BTD update rule in (17) shows different perspectives compared to that in (9). In particular, ⌘ = 1/2 maximizes the effect of C in (17). From Table 1, it leads to reasonably good performances in most domains. Another natural choice is to multiply ⌘ to C instead of A. However, in such cases, we need to introduce another constrain ⌘ > 0, whereas in the current BTD, convergence is guaranteed for all ⌘ 2 R. Finally, we note that simply multiplying C by a large positive constant does not lead to good results in general. This is because in this case, it may increase variance, and destabilize the algorithm. Overall results are given in Appendix Section 7.8. 5 CONCLUSION In this work, we have proposed a new framework to design off-policy TD-learning algorithms from control-theoretic view. Future research directions would be extending the framework to non-linear function approximation setting. 6 ACKNOWLEDGEMENTS This work was supported by the National Research Foundation under Grant NRF2021R1F1A1061613, Institute of Information communications Technology Planning Evaluation (IITP) grant funded by the Korea government (MSIT)(No.2022-0-00469), and the BK21 FOUR from the Ministry of Education (Republic of Korea). (Corresponding author: Donghwan Lee.)
1. What is the focus and contribution of the paper regarding temporal-difference learning algorithms? 2. What are the strengths of the proposed approach, particularly in terms of stability and convergence? 3. What are the weaknesses of the paper, especially regarding the theoretical analysis and numerical comparisons? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper provides some temporal-difference (TD) learning algorithms based on the celebrated backstepping technique from control theory for addressing a family of nonlinear systems. The proposed TD algorithms are claimed to be stable and convergent in contrast with the divergent behavior of standard TD algorithms. Several existing TD generalizations have been shown as special cases of the proposed backstepping TD algorithm. Strengths And Weaknesses The paper draws ideas from control theory and presents stable and convergent backstepping TD algorithm for off-policy learning. The idea is interesting and seems effective. Strengths: Interesting combinations of backstepping control with model-free TD learning Well-established analysis and stability results General enough to accommodate existing TD modifications Weaknesses: Backstepping design and theory is suited for nonlinear control systems, but only linear BTD algorithms are discussed here Theoretic analysis (e.g., asymptotic/non-asymptotic convergence) of BTD is lacking Numerical convergence comparison against existing TD variants not provided Clarity, Quality, Novelty And Reproducibility The paper is generally clear, well-written, and contains some original ideas and algorithms.
ICLR
Title Backstepping Temporal Difference Learning Abstract Off-policy learning ability is an important feature of reinforcement learning (RL) for practical applications. However, even one of the most elementary RL algorithms, temporal-difference (TD) learning, is known to suffer form divergence issue when the off-policy scheme is used together with linear function approximation. To overcome the divergent behavior, several off-policy TD-learning algorithms, including gradient-TD learning (GTD), and TD-learning with correction (TDC), have been developed until now. In this work, we provide a unified view of such algorithms from a purely control-theoretic perspective, and propose a new convergent algorithm. Our method relies on the backstepping technique, which is widely used in nonlinear control theory. Finally, convergence of the proposed algorithm is experimentally verified in environments where the standard TD-learning is known to be unstable. 1 INTRODUCTION Since Mnih et al. (2015), which has demonstrated that deep reinforcement learning (RL) outperforms human in several video games (Atari 2600 games), significant advances has been made in RL theory and algorithms. For instance, Van Hasselt et al. (2016); Lan et al. (2020); Chen et al. (2021) proposed some variants of the so-called deep Q-network (Mnih et al., 2015) that achieves higher scores in Atari games than the original deep Q-network. An improved deep RL was developed in Badia et al. (2020) that performs better than average human scores across 57 Atari games. Not only performing well in video games, but Schrittwieser et al. (2020) also have shown that an RL agent can self-learn chess, Go, and Shogi. Furthermore, RL has shown great success in real world applications, e.g., robotics (Kober et al., 2013), healthcare (Gottesman et al., 2019), and recommendation systems (Chen et al., 2019). Despite the practical success of deep RL, there is still a gap between theory and practice. One of the notorious phenomena is the deadly triad (Sutton & Barto, 2018), the diverging issue of the algorithm when function approximation, off-policy learning, and bootstrapping are used together. One of the most fundamental algorithms, the so-called temporal-difference (TD) learning (Sutton, 1988), is known to diverge under the deadly triad, and several works have tried to fix this issue for decades. In particular, the seminar works Sutton et al. (2008; 2009) introduced the so-called GTD, gradient-TD2 (GTD2), and TDC, which are off-policy, and have been proved to be convergent with linear function approximation. More recently, Ghiassian et al. (2020) suggested regularized version of TDC called TD learning with regularized correction (TDRC), and showed its favorable features under off-policy settings. Moreover, Lee et al. (2021) developed several variants of GTD based on primal dual formulation. On the other hand, backstepping control (Khalil, 2015) is a popular method in designing stable controllers for nonlinear systems with special structures. The design technique offers a wide range of stable controllers, and is proved to be robust under various settings. It has been used in various fields including quadrotor helicopters (Madani & Benallegue, 2006), mobile robots (Fierro & Lewis, 1997), and ship control (Fossen & Strand, 1999). Using backstepping control technique, in this paper, we develop a new convergent off-policy TD-learning which is a single time-scale algorithm. 
In particular, the goal of this paper is to introduce a new unifying framework to design off-policy TDlearning algorithms under linear function approximation. The main contributions are summarized as follows: • We propose a systemic way to generate off-policy TD-learning algorithms including GTD2 and TDC from control theoretic perspective. • Using our framework, we derive a new TD-learning algorithm, which we call backstepping TD (BTD). • We experimentally verify its convergence and performance under various settings including where off-policy TD has known to be unstable. In particular, most of the previous works on off-policy TD-learning algorithms (e.g., GTD2 and TDC) are derived based on optimization perspectives starting with an objective function. Then, the convergence is proved by proving stability of the corresponding O.D.E. models. In this paper, we follow reversed steps, and reveal that an off-policy TD-learning algorithm (called backstepping TD) can be derived based on control theoretic motivations. In particular, we develop stable O.D.E. models first using the backstepping technique, and then recover back the corresponding off-policy TD-learning algorithms. The new analysis reveals connections between off-policy TD-learning and notions in control theory, and provides additional insights on off-policy TD-learning with simple concepts in control theory. This sound theoretical foundation established in this paper can potentially motivate further analysis and developments of new algorithms. Finally, we briefly summarize TD learning algorithms that guarantee convergence under linear function approximation. GTD (Sutton et al., 2008), GTD2 and TDC (Sutton et al., 2009) have been developed to approximate gradient on mean squared projected Belllman error. Later, GTD and GTD2 has been discovered to solve minimax optimization problem (Macua et al., 2014; Liu et al., 2020). Such sadde-point view point of GTD has led to many interesting results including Du et al. (2017); Dai et al. (2018); Lee et al. (2021). TDRC (Ghiassian et al., 2020) adds an additional term similar to regularization term to one-side of parameter update, and tries to balance between the performance of TD and stability of TDC. TDC++ (Ghiassian et al., 2020) also adds an additional regularization term on both sides of the parameter update. Even though TDRC shows good performance, it uses additional parameter condition to ensure convergence, whereas TDC++ does not. 2 PRELIMINARIES 2.1 NONLINEAR SYSTEM THEORY Nonlinear system theory will play an important role throughout this paper. Here, we briefly review basics of nonlinear systems. Let us consider the continuous-time nonlinear system ẋt = f(xt, ut), x0 2 Rn, (1) where x0 2 Rn is the initial state, t 2 R, t 0 is the time, xt 2 Rn is the state, ut 2 Rn is the control input, and f : Rn ⇥ Rn ! Rn is a nonlinear mapping. An important concept in dealing with nonlinear systems is the equilibrium point. Considering the state-feedback law ut = µ(xt), the system can be written as ẋt = f(xt, ut) = f(xt, µ(xt)) =: f(xt), and a point x = xe in the state-space is said to be an equilibrium point of (1) if it has the property that whenever the state of the system starts at xe, it will remain at xe (Khalil, 2015). For ẋt = f(xt), the equilibrium points are the real roots of the equation f(x) = 0. The equilibrium point xe is said to be globally asymptotically stable if for any initial state x0 2 Rn, xt ! xe as t ! 1. 
An important control design problem is to construct a state-feedback law ut = µ(xt) such that the origin becomes the globally asymptotically stable equilibrium point of (1). To design a statefeedback law to meet such a goal, control Lyapunov function plays a central role, which is defined in the following definition. Definition 2.1 (Control Lyapunov function (Sontag, 2013)). A positive definite function V : Rn ! R is called a control Lyapunov function (CLF) if for all x 6= 0, there exists a corresponding control input u 2 Rm that satisfies the inequality, rxV (x)>f(x, u) < 0 for all x 6= 0. Once such a CLF is found, then it guarantees that there exists the control law that stabilizes the system. Moreover, the corresponding state-feedback control law can be extracted from the CLF, e.g., µ(x) = argminu rxV (x)>f(x, u) provided that the minimum exists and unique. The concept of control Lyapunov function will be used in the derivations of our main results. For the autonomous system, ẋt = f(xt), and Lypaunov function V : Rn ! R, Lie derivative is defined as LfV (x) := rxV (x)>f(x) so that V̇ (xt) = LfV (xt) along the solution. 2.2 STOCHASTIC APPROXIMATION AND O.D.E. APPROACH Including Q-learning (Watkins & Dayan, 1992) and TD-learning (Sutton, 1988), reinforcement learning algorithms can be considered as stochastic approximation (Robbins & Monro, 1951) described by xk+1 = xk + ↵k(f(xk) + ✏k), (2) where f : Rn ! Rn is a nonlinear mapping, and ✏k is an i.i.d. noise. Borkar and Meyn theorem (Borkar & Meyn, 2000) is a well-known method to bridge the asymptotic convergence of stochastic approximation and the stability of its corresponding O.D.E. model, which can be expressed as ẋt = f(xt), x0 2 Rn, (3) where x0 2 Rn is initial state, and t 2 R, t 0 is the time. Borkar and Meyn theorem (Borkar & Meyn, 2000) states that under the conditions in Assumption 7.1 in the Appendix, global asymptotic stability of the O.D.E. (3) leads to asymptotic convergence of the stochastic approximation update (2), which is formally stated in the following lemma. Lemma 2.1 (Borkar and Meyn theorem (Borkar & Meyn, 2000)). Suppose that Assumption 7.1 in the Appendix holds, and consider the stochastic approximation in (2). Then, for any initial x0 2 Rn, supk 0 ||xk|| < 1 with probability one. In addition , xk ! xe as k ! 1 with probability one, where xe is the unique equilibrium point of the O.D.E. in (3). The main idea of Borkar and Meyn theorem is as follows: iterations of a stochastic recursive algorithm follow the solution of its corresponding O.D.E. in the limit when the step-size satisfies the so-called Robbins-Monro condition (Robbins & Monro, 1951) in (33) in the Appendix. Therefore, by proving asymptotic stability of the O.D.E., we can induce convergence of the original algorithm. In this paper, we will use an O.D.E. model of TD-learning, which is expressed as a linear timeinvariant system. 2.3 BACKSTEPPING CONTROL This section provides the concept of the backstepping control (Kokotovic, 1992; Khalil, 2015), which will be the main tool in this paper to derive TD-learning algorithms. The backstepping technique is a popular tool for generating a CLF (control Lyapunov function) for nonlinear systems with specific structures. In particular, let us start with the following general nonlinear system: ẏt = f(yt) + g(yt)xt (4) ẋt = ut, where yt 2 Rm, xt 2 Rm are the states, ut 2 Rm is the input, and f : Rm ! Rm and g : Rm ! R are continuous functions. 
The first system is a nonlinear system with a particular affine structure, and the second system is simply an integrator. It can be seen as a cascade interconnection of two systems, where the second system’s state is injected to the input of the first system. The backstepping control technique gives us a systematic way to generate a CLF for such particular nonlinear systems provided that the first system admits a CLF independently. To this end, we suppose that the first system admits a CLF. Through the backstepping approach, designing a stable control law for the above system can be summarized in the following steps: Step 1. Consider xt in (4) as virtual input x̃(yt) (state-feedback controller), and consider the following system: ̇t = f(yt) + g(yt)x̃(yt). Design x̃(yt) such that the above system admits a CLF V , i.e., it admits a positive definite and radially unbounded function V such that its time derivative is negative definite, i.e.,V̇ (yt) < 0, 8yt 6= 0. Step 2. Denote the error between the virtual state-feedback controller x̃(yt) and state variable xt as zt := xt x̃(yt). Now, rewrite the original O.D.E. in (4) with the new variable (yt, zt): d dt yt zt = f(yt) + g(yt)x̃(yt) + g(yt)zt ut ˙̃x(yt) Step 3. Design the control input ut such that the above system is stable. One popular choice is to consider the CLF Vc(yt, zt) := V (yt) + ||zt||2/2, where V (yt) is defined in Step 1. Then choose ut such that the time derivative of Vc(yt, zt) to be negative definite. A simple example of designing stabilizing control law by backstepping technique is given in Appendix Section 7.3. 2.4 MARKOV DECISION PROCESS In this paper, we consider a Markov decision process (MDP) characterized by the tuple (S,A,P, , r), where S := {1, 2, . . . , |S|} stands for the set of finite state space, |S| denotes the size of S , A := {1, 2, . . . , |A|} denotes the set of finite action space, |A| is the size of A, 2 (0, 1) is the discount factor, P : S ⇥ A ⇥ S ! [0, 1] denotes the Markov transition kernel, and r : S ⇥A ⇥ S ! R means the reward function. In particular, if an agent at state s 2 S , takes action a 2 A, then the current state transits to the next state s0 2 S with probability P(s, a, s0), and the agent receives reward r(s, a, s0). Each element of the state to state transition matrix under policy ⇡, denoted by P⇡ 2 R|S|⇥|S| is [P⇡]ij := P a2A ⇡(a|i)P(i, a, j), 1 i, j |S|, where [P⇡]ij corresponds to i-th row and j-th column element of matrix P⇡ . Moreover, the stationary state distribution induced by policy µ, is denoted as dµ : S ! [0, 1], i.e., dµ>Pµ = dµ>. With the above setup, we define the following matrix notations: Dµ := 2 64 dµ(1) . . . dµ(|S|) 3 75 2 R|S|⇥|S|, R⇡ = 2 664 Ea⇠⇡[r(s, a, s0)|s = 1] Ea⇠⇡[r(s, a, s0)|s = 2] ... Ea⇠⇡[r(s, a, s0)|s = |S|] 3 775 2 R |S|, where Dµ is a diagonal matrix of the state distribution induced by behavior policy µ, each element of R⇡ is the expected reward under policy ⇡ at the corresponding state. The policy evaluation problem aims to approximate the value function at state s 2 S , v⇡(s) := E ⇥P1 k=0 kr(Sk, Ak, Sk+1) S0 = s,⇡ ⇤ , where the trajectory is generated under policy ⇡ : S ⇥ A ! [0, 1]. In this paper, we consider the linear function approximation to approximate the value function v⇡(s). In particular, we parameterize the value function v⇡(s) with >(s)⇠, where : S ! Rn is a pre-selected feature vector with (s) := [ 1(s) · · · n(s)], 1, . . . , n : S ! R are feature functions, and ⇠ 2 Rn is the learning parameter. 
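The matrix notation of Section 2.4 (P^π, D_µ, R^π) and the linear parameterization Φξ are easy to get wrong in code, so here is a small shape-checking sketch for a randomly generated MDP. The MDP, both policies, and the features are placeholders of my own; only the construction of the matrices follows the definitions above.

```python
import numpy as np

# Shape-checking sketch for the MDP matrices of Section 2.4 and the linear
# value parameterization (random placeholder MDP and features).
rng = np.random.default_rng(1)
nS, nA, n = 4, 2, 3
P = rng.dirichlet(np.ones(nS), size=(nS, nA))        # P[s, a, s'] transition kernel
r = rng.normal(size=(nS, nA, nS))                    # reward r(s, a, s')
pi = rng.dirichlet(np.ones(nA), size=nS)             # target policy pi(a|s)
mu = rng.dirichlet(np.ones(nA), size=nS)             # behavior policy mu(a|s)

P_pi = np.einsum("ia,iaj->ij", pi, P)                # [P^pi]_ij = sum_a pi(a|i) P(i,a,j)
P_mu = np.einsum("ia,iaj->ij", mu, P)
R_pi = np.einsum("ia,iaj,iaj->i", pi, P, r)          # expected reward under pi per state

# stationary distribution of the behavior policy (left Perron eigenvector of P_mu)
w, V = np.linalg.eig(P_mu.T)
d_mu = np.real(V[:, np.argmin(np.abs(w - 1.0))])
d_mu = d_mu / d_mu.sum()
D_mu = np.diag(d_mu)

Phi = rng.normal(size=(nS, n))                       # feature matrix, full column rank a.s.
xi = rng.normal(size=n)                              # learning parameter
v_approx = Phi @ xi                                  # linear parameterization of the value function
print(np.allclose(d_mu @ P_mu, d_mu), v_approx.shape)
```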
The goal of the policy evaluation problem is then to approximate the value function v^π(s) using this linear parameterization, i.e., φ^⊤(s)ξ ≈ v^π(s). Moreover, using the matrix notation Φ := [φ(1), φ(2), · · · , φ(|S|)]^⊤ ∈ ℝ^{|S|×n}, called the feature matrix, the linear parameterization can be written in the vector form Φξ. We also assume that Φ is a full column rank matrix throughout the paper, which is a standard assumption (Sutton et al., 2008; 2009; Ghiassian et al., 2020; Lee et al., 2021). 2.5 TEMPORAL DIFFERENCE LEARNING This section provides a brief background on TD-learning (Sutton, 1988). Suppose that we have access to stochastic samples of the state s_k from the stationary state distribution induced by the behavior policy µ, i.e., s_k ∼ d_µ(·), and that the action is chosen under the behavior policy µ, i.e., a_k ∼ µ(·|s_k). Then, we observe the next state s'_k following s'_k ∼ P(s_k, a_k, ·), and receive the reward r_k := r(s_k, a_k, s'_k). Using the simplified notations φ_k := φ(s_k) and φ'_k := φ(s'_k) for the feature vectors, the TD-learning update at time step k with linear function approximation can be expressed as ξ_{k+1} = ξ_k + α_k ρ_k δ_k(ξ_k) φ_k, where α_k > 0 is the step-size, δ_k(ξ_k) := r_k + γφ'^⊤_k ξ_k − φ^⊤_k ξ_k is called the temporal difference error (TD-error), and ρ_k := ρ(s_k, a_k) = π(a_k|s_k)/µ(a_k|s_k) is called the importance sampling ratio (Precup et al., 2001). The importance sampling ratio reweights the TD-error to handle the mismatch between the behavior policy µ and the target policy π. It is known that TD-learning with linear function approximation and an off-policy learning scheme does not guarantee convergence in general. The above stochastic approximation aims to find the fixed point of the projected Bellman equation, which is, after some manipulations, expressed as: Φ^⊤D_µΦ ξ* − γΦ^⊤D_µP^πΦ ξ* = Φ^⊤D_µR^π. (5) To simplify the expressions, let us introduce one more piece of notation: A := E_{s∼d_µ(s), s'∼P^π(s'|s)}[φ(s)(φ(s) − γφ(s'))^⊤] = Φ^⊤D_µΦ − γΦ^⊤D_µP^πΦ ∈ ℝ^{n×n}, b := E_{s∼d_µ(s), a∼π(a|s), s'∼P(s'|s,a)}[r(s, a, s')φ(s)] = Φ^⊤D_µR^π ∈ ℝ^{n×1}. Even though we could use an arbitrary distribution, for simplicity we assume the stationary distribution of µ. Now, we can rewrite (5) compactly as Aξ* = b. (6) The corresponding O.D.E. for TD-learning can be written as ξ̇_t = −Aξ_t + b, ξ_0 ∈ ℝ^n. Using the coordinate transform x_k := ξ_k − ξ*, we get the O.D.E. ẋ_t = −Ax_t, x_0 ∈ ℝ^n, whose origin is a globally asymptotically stable equilibrium point if ρ(s, a) = π(a|s)/µ(a|s) = 1 for all (s, a) ∈ S × A. Throughout the paper we will use the vector x_k := ξ_k − ξ* to represent the coordinate transform of ξ_k to the origin, and will use ξ_t and x_t to denote the corresponding continuous-time counterparts of ξ_k and x_k, respectively. 2.6 GRADIENT TEMPORAL DIFFERENCE LEARNING To fix the instability issue of off-policy TD-learning under linear function approximation, Sutton et al. (2008) and Sutton et al. (2009) introduced various stable off-policy TD-learning algorithms, called GTD (gradient TD-learning), GTD2, and TDC (temporal difference correction). The idea behind these algorithms is to minimize the mean-square projected Bellman error (MSPBE) min_{ξ∈ℝ^n} (1/2)||Φ^⊤D_µ(R^π + γP^πΦξ − Φξ)||²_{(Φ^⊤D_µΦ)^{-1}}, where ||x||_D := √(x^⊤Dx), and the global minimizer of the MSPBE corresponds to the solution of (6). The core idea of the algorithms is to introduce an additional variable λ_k ∈ ℝ^n to approximate the stochastic gradient descent method for the MSPBE objective. In particular, the GTD2 update can be written as λ_{k+1} = λ_k + α_k(−φ^⊤_kλ_k + ρ_kδ_k(ξ_k))φ_k, ξ_{k+1} = ξ_k + α_k(φ^⊤_kλ_kφ_k − γρ_kφ^⊤_kλ_kφ'_k).
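As a sanity check on the quantities defined above (the matrices A and b, the TD fixed point Aξ* = b, and the per-sample TD update), the following sketch builds them for a random placeholder MDP; the numbers themselves are meaningless, only the formulas mirror the text.

```python
import numpy as np

# Sanity-check sketch for the matrices A, b and the TD fixed point A xi* = b
# of Section 2.5, plus a per-sample off-policy TD(0) step (random placeholder
# MDP quantities; only the formulas mirror the text).
rng = np.random.default_rng(2)
nS, n, gamma = 5, 3, 0.9
Phi = rng.normal(size=(nS, n))                        # feature matrix (full column rank a.s.)
d_mu = rng.dirichlet(np.ones(nS)); D_mu = np.diag(d_mu)
P_pi = rng.dirichlet(np.ones(nS), size=nS)            # state-to-state transition under pi
R_pi = rng.normal(size=nS)

A = Phi.T @ D_mu @ Phi - gamma * Phi.T @ D_mu @ P_pi @ Phi
b = Phi.T @ D_mu @ R_pi
xi_star = np.linalg.solve(A, b)                       # solution of A xi* = b

def td_step(xi, phi, phi_next, reward, rho, alpha=0.05):
    """One off-policy linear TD(0) update with importance sampling ratio rho."""
    delta = reward + gamma * phi_next @ xi - phi @ xi  # TD error
    return xi + alpha * rho * delta * phi

xi = td_step(np.zeros(n), Phi[0], Phi[1], R_pi[0], rho=1.2)
print(xi_star, xi)
```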
We use λ_t to denote the continuous-time counterpart of λ_k. Since the fixed point of λ_k is zero, it does not require a coordinate transformation. GTD2 is a single time-scale algorithm because it uses a single step-size α_k. The corresponding O.D.E. is expressed as λ̇_t = −Cλ_t − Ax_t, ẋ_t = A^⊤λ_t, where C := E_{s∼d_µ(s)}[φ(s)φ^⊤(s)] = Φ^⊤D_µΦ ∈ ℝ^{n×n}. Similarly, the TDC update can be written as λ_{k+1} = λ_k + α_k(−φ^⊤_kλ_k + ρ_kδ_k(ξ_k))φ_k (7), ξ_{k+1} = ξ_k + β_k(−γρ_kφ^⊤_kλ_kφ'_k + ρ_kδ_k(ξ_k)φ_k), (8) where the step-sizes α_k and β_k satisfy α_k/β_k → 0 as k → ∞ and the Robbins and Monro step-size condition (Robbins & Monro, 1951) in (33) in the Appendix. It is a two time-scale algorithm because it uses two step-sizes, α_k and β_k. 3 DESIGNING TD-LEARNING THROUGH BACKSTEPPING We briefly explain the motivation for our algorithmic development. The Borkar and Meyn theorem (Borkar & Meyn, 2000) in Lemma 2.1 is a typical tool to prove convergence of Q-learning (Borkar & Meyn, 2000; Lee & He, 2019) and TD-learning (Sutton et al., 2009; Lee et al., 2021). Most of the previous works on off-policy TD-learning algorithms (e.g., GTD2 and TDC) first start with an objective function, and then derive GTD algorithms based on optimization perspectives. The convergence is then proved using the corresponding O.D.E. models and the stability theory of linear time-invariant systems. A natural question is: can we derive off-policy TD-learning algorithms following the reversed steps? In other words, can we develop a stable O.D.E. model first using tools in control theory, and then recover the corresponding off-policy TD-learning algorithms? In this paper, we reveal that a class of off-policy TD-learning algorithms can be derived based on purely control-theoretic motivations following such a reversed process. By doing so, this work provides additional insights on off-policy TD-learning algorithms and gives a sound theoretical foundation for further developments of new algorithms. Designing stabilizing control laws for continuous-time nonlinear systems has been successful over the past decades (Khalil, 2015). One such technique, so-called backstepping, is a popular controller design method in the nonlinear control literature (Khalil, 2015). With the help of the backstepping method (Khalil, 2015), we design stabilizing control laws for continuous-time systems, then derive the corresponding off-policy TD-learning algorithms, which are shown to be convergent via the Borkar and Meyn theorem (Borkar & Meyn, 2000) in Lemma 2.1. The brief procedure is explained in the following steps: Step 1) Choose an appropriate continuous-time dynamic model such that (a) we can recover the TD fixed point ξ* in (6) via its equilibrium point; (b) the corresponding stochastic approximation algorithm can be implemented only through transitions of the MDP and accessible data. Step 2) Using the backstepping method, design a control input to stabilize the dynamic model chosen in Step 1). 3.1 BACKSTEPPING TD Now, we introduce a new off-policy TD-learning algorithm, which we call Backstepping TD (BTD). First, we will develop a stabilizing control law for the following continuous-time system: λ̇_t = (−C + ηA)λ_t − Ax_t (9), ẋ_t = u_t (10). The idea stems from finding a control system to which we can easily apply the backstepping technique. In detail, the backstepping technique can be applied to two interconnected systems where one subsystem, namely (4), can be stabilized with x_t in (4) as a control input. Therefore, our first aim is to find such a system.
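For reference, the GTD2 and (two-time-scale) TDC updates above can be written as per-transition functions. This is only a sketch of the stated update rules with variable names of my own (lam for the auxiliary variable λ); the step-size schedules, including the ratio condition required for TDC, are left to the caller.

```python
import numpy as np

# Per-sample GTD2 and (two-time-scale) TDC updates from Section 2.6 and (7)-(8),
# written as plain functions (a sketch of the stated rules; not the authors'
# code). The text requires alpha_k / beta_k -> 0 for TDC in addition to the
# Robbins-Monro condition.
def gtd2_step(lam, xi, phi, phi_next, reward, rho, gamma, alpha):
    delta = reward + gamma * phi_next @ xi - phi @ xi
    lam_new = lam + alpha * (-(phi @ lam) + rho * delta) * phi
    xi_new = xi + alpha * ((phi @ lam) * phi - gamma * rho * (phi @ lam) * phi_next)
    return lam_new, xi_new

def tdc_step(lam, xi, phi, phi_next, reward, rho, gamma, alpha, beta):
    delta = reward + gamma * phi_next @ xi - phi @ xi
    lam_new = lam + alpha * (-(phi @ lam) + rho * delta) * phi
    xi_new = xi + beta * (-gamma * rho * (phi @ lam) * phi_next + rho * delta * phi)
    return lam_new, xi_new
```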
To this end, we can try a natural choice of O.D.E. to solve the TD problem, i.e., λ̇_t = −Aλ_t, which is however unstable in the off-policy case. Therefore, we develop the modified O.D.E. λ̇_t = (−C + ηA)λ_t − Ax_t, where x_t is the control input, the negative definite matrix −C is introduced to stabilize the system, and η > 0 is introduced to provide additional degrees of freedom in the design. Now, the constructed system can be stabilized through the state-feedback controller x_t = ηλ_t and admits the simple control Lyapunov function V(λ) = ||λ||²₂. Moreover, A should be included on the right-hand side in order to implement the corresponding algorithm without knowing the solution, because x_k = ξ_k − ξ* and ξ* should be removed using Aξ* = b in the final step. Simply setting x_t = ηλ_t would cancel out ηAλ_t on the right-hand side, and the O.D.E. would become λ̇_t = −Cλ_t. Therefore, as mentioned before, we can apply the backstepping technique by adding an additional dynamic controller. As the next step, the backstepping technique is applied, and one needs to observe what the final form of the control system would be. In summary, if we compose f(λ_t) from a combination of A and −C (not necessarily −C; it may be −I), it is a reasonable candidate for applying the backstepping technique. Cancelling A with the virtual input only leaves −C, which guarantees stability from its negative definiteness. Therefore, (9) and (10) form a reasonable candidate for the dynamics to which we can apply the backstepping technique. In particular, our aim is to design an appropriate control input u_t for the above system such that the origin is the unique asymptotically stable equilibrium point, i.e., (λ_t, x_t) → 0 as t → ∞ for any (λ_0, x_0) ∈ ℝ^n × ℝ^n. The overall procedure is depicted in Figure 1 in the Appendix, and we show how to choose the control input u_t in the following lemma. Lemma 3.1. Consider the O.D.E. in (9) and (10). If we choose the control input u_t := (A^⊤ + η²A − ηC)λ_t − ηAx_t, then the above O.D.E. has a globally asymptotically stable origin, i.e., (λ_t, x_t) → (0, 0) as t → ∞ for any (λ_0, x_0) ∈ ℝ^n × ℝ^n. Proof sketch. The proof follows the steps of the backstepping scheme in Section 3. First, substituting x_t in (9) with a virtual controller x̃(λ_t), we design a control law x̃(λ_t) that stabilizes the following virtual system: λ̇_t = (−C + ηA)λ_t − Ax̃(λ_t). (11) One natural choice of the virtual controller is x̃(λ_t) = ηλ_t. Plugging it into (11) leads to λ̇_t = −Cλ_t, and we can verify the global asymptotic stability of this system with the following Lyapunov function: V(λ_t) := ||λ_t||²₂ / 2. (12) We now consider the original O.D.E. in (9) and (10). Simple algebraic manipulations yield λ̇_t = −Cλ_t − A(x_t − ηλ_t), ẋ_t = u_t. The error between x_t and the virtual controller x̃(λ_t) can be expressed as the new variable z_t := x_t − x̃(λ_t) = x_t − ηλ_t. Rewriting the O.D.E. in (9) and (10) in the (λ_t, z_t) coordinates, we have λ̇_t = −Cλ_t − Az_t, (13) ż_t = u_t + ηCλ_t + ηAz_t. To prove the global asymptotic stability of the above system, consider the function V_c(λ_t, z_t) := V(λ_t) + ||z_t||²₂ / 2, where V(λ_t) is defined in (12). Taking u_t = A^⊤λ_t − ηCλ_t − ηAz_t, we can apply LaSalle's invariance principle in Lemma 7.1. The full proof is in Appendix Section 7.4.1. Using the relation z_t := x_t − ηλ_t, the control input in the original coordinates (λ_t, x_t) can be written as u_t := A^⊤λ_t − ηCλ_t − ηAz_t = (A^⊤ + η²A − ηC)λ_t − ηAx_t.
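Before this input is plugged back into the open-loop system below, the proof sketch can be spot-checked numerically: with u_t chosen as in Lemma 3.1, the (λ_t, z_t) dynamics reduce to λ̇_t = −Cλ_t − Az_t and ż_t = A^⊤λ_t, a linear system, so global asymptotic stability amounts to its block matrix being Hurwitz. The sketch below checks this for a random (possibly unstable) A and a random positive definite C, both placeholders.

```python
import numpy as np

# Numerical spot check of Lemma 3.1 (placeholder matrices only): in the
# (lambda, z) coordinates the closed loop under u_t is linear with block
# matrix [[-C, -A], [A^T, 0]], so we verify that its eigenvalues have
# negative real parts. A random dense A is nonsingular with probability one,
# which is what the LaSalle argument needs.
rng = np.random.default_rng(3)
n = 4
M = rng.normal(size=(n, n)); C = M @ M.T + np.eye(n)     # positive definite C
A = rng.normal(size=(n, n))                               # possibly unstable A

closed_loop = np.block([[-C, -A],
                        [A.T, np.zeros((n, n))]])
print(np.max(np.linalg.eigvals(closed_loop).real))        # expected to be < 0
```

Note that in the (λ, z) coordinates this closed loop has the same form as the GTD2 O.D.E. of Section 2.6, which is consistent with the later observation that BTD with η = 0 reduces to GTD2.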
Plugging this input into the original open-loop system in (9) and (10), the closed-loop system in the original coordinate ( t, xt) can written as ̇t = ( C + ⌘A) t Axt (14) ẋt = (A > + ⌘2A ⌘C) t ⌘Axt, (15) whose origin is also globally asymptotically stable according to Lemma 3.1. Recovering back from xt to ⇠t, we have ddt t ⇠t = C + ⌘A A A> + ⌘2A ⌘C ⌘A t ⇠t + b ⌘b . The corresponding stochastic approximation of the O.D.E. in Theorem 3.1 becomes k+1 = k + ↵k((( 1 + ⌘) >k ⌘⇢k 0>k ) k + ⇢k k(⇠k)) k (16) ⇠k+1 = ⇠k + ↵k((( ⌘ + ⌘2) >k ⌘2⇢k 0>k ) k k + ⌘⇢k k(⇠k) k + ( >k k k ⇢k >k k 0k)). (17) The equilibrium point of the above O.D.E. is (0, ⇠⇤). Hence, we only need to transform the coordinate of ⇠t to xt = ⇠t ⇠⇤, which results to the O.D.E. in (14) and (15). With the above result, we are now ready to prove convergence of Algorithm 1. The proof simply follows from Borkar and Meyn theorem in Lemma 2.1, of which the details can be found in Sutton et al. (2009). Theorem 3.1. Under the step size condition (33) , with Algorithm 1 in Appendix, ⇠k ! ⇠⇤ as k ! 1 with probability one, where ⇠⇤ is the fixed point of (6). Proof. The proof is done by checking Assumption 7.1 in Appendix. Remark 3.1. Theorem 3.1 doesn’t require any condition on ⌘. Therefore, we can set ⌘ = 0, which results to GTD2 developed in Sutton et al. (2009). 3.2 RECOVERING SINGLE TIME-SCALE TDC In this section, we derive a single-time scale version of TDC (Sutton et al., 2009) through the backstepping design in the previous section. TDC (Sutton et al., 2009) was originally developed as a two-time scale algorithm in Sutton et al. (2009). Even though the two time-scale method provides theoretical guarantee for a larger class of algorithms, the single time-scale scheme provides more simplicity in practice, and shows faster convergence empirically. Subsequently, Maei (2011) provided a single-time scale version of TDC by multiplying a large enough constant ⌘ > 0 to the faster time scale part (7), which leads to k+1 = k + k⌘( >k k + ⇢k k(⇠k)) k (18) ⇠k+1 = ⇠k + k( ⇢k >k k 0k + ⇢k k(⇠k) k), (19) where ⌘ > max 0, min C 1(A+A>)/2 . (20) Here, we derive another version of single-time TDC by multiplying a constant to the slower timescale part in (8), which results in k+1 = k + ↵k( >k k + ⇢k k(⇠k)) k (21) ⇠k+1 = ⇠k + ↵k ( > k k k ⇢k >k k 0k + ⇢k k(⇠k) k), (22) where satisfies 0 < < min(C) min(A) if min(A) < 0, else > 0. (23) We can derive the above algorithm following similar steps as in Section 3.1. Let us first consider the following dynamic model: ̇t = C t Axt (24) ẋt = ut (25) Using the backstepping technique, we can prove that the above system admits the origin as a global asymptotically stable equilibrium point with the control input ut := (A> C) t A⇠t , which is shown in the following lemma: Lemma 3.2. Consider the O.D.E. in (24) and (25). Suppose that we choose the control input ut := (A> C) t A⇠t ), and satisfies condition (23). Then, the above O.D.E. has globally asymptotically stable origin, i.e., ( t, xt) ! (0, 0) as t ! 1. The proof of Lemma 3.2 is given in Appendix Section 7.4.2. By Borkar and Meyn theorem in Lemma 2.1, we can readily prove the convergence of Algorithm 2 in Appendix, which uses stochastic recursive update (21) and (22). Theorem 3.2. Consider Algorithm 2 in Appendix. Under the step size condition (33), and if satisfies (23), ⇠k ! ⇠⇤ as k ! 1 with probability one, where ⇠⇤ is the fixed point of (6). We will call the Algorithm 4 as TDC-slow, and single-time version of TDC suggested by Maei (2011) as TDC-fast. 
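Putting the stochastic updates (16) and (17) into code gives the per-sample BTD step below. Several signs and γ factors in the displayed equations are hard to read in this copy, so the sketch reconstructs them to be consistent with the O.D.E. in (14) and (15); treat it as my reading of the update, not the authors' reference implementation. Setting eta to zero recovers the GTD2 step, in line with Remark 3.1.

```python
import numpy as np

# One-sample BTD update, written out from (16)-(17) as reconstructed here
# (a sketch; signs follow the O.D.E. in (14)-(15)).
def btd_step(lam, xi, phi, phi_next, reward, rho, gamma, eta, alpha):
    delta = reward + gamma * phi_next @ xi - phi @ xi          # TD error
    # lambda update, eq. (16)
    lam_new = lam + alpha * (
        ((-1.0 + eta) * (phi @ lam) - eta * gamma * rho * (phi_next @ lam))
        + rho * delta
    ) * phi
    # xi update, eq. (17)
    xi_new = xi + alpha * (
        ((-eta + eta**2) * (phi @ lam) - eta**2 * gamma * rho * (phi_next @ lam)) * phi
        + eta * rho * delta * phi
        + ((phi @ lam) * phi - gamma * rho * (phi @ lam) * phi_next)
    )
    return lam_new, xi_new
```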
Other than the multiplication of a constant reflecting two-time scale property, we can make TDC into a single-time algorithm, which we call a single time-scale TDC2, while the original version in Maei (2011) will be called the single time-scale TDC. The derivation is given in Appendix Section 7.5. The performance of such versions of TDC are evaluated in Appendix Section 7.9.1. Even though not one of the algorithms outperforms each other, TDC-slow and TDC2 shows better performance in general. 3.3 GENERALIZING TDC++ This section provides versions of TDC++ (Ghiassian et al., 2020), which is variant of TDC. With an additional regularization term ⇠k on both updates of TDC in (7) and (8), the update is written as follows: k+1 = k + ↵k⌘( >k k + ⇢k k(⇠k)) k k) (26) ⇠k+1 = ⇠k + ↵k( ⇢k >k k 0k k + ⇢k k(⇠k) k), (27) where ⌘ > 0 satisfies (20) and > 0 is a new parameter. Note that TDC++ can be simply viewed as variant of TDC by adding the term k in the update, which can be seen as a regularization term. Therefore, letting = 0 yields the original TDC. In this paper, we prove that our controller design leads to the following update: k+1 = k + ↵k⌘( >k k + ⇢k k(⇠k)) k k) (28) ⇠k+1 = ⇠k + ↵k( ⇢k >k k 0k + (1 ⌘) >k k k ⌘ k + ⇢k⌘ k(⇠k) k), (29) where and are new parameters and when = 1/⌘ it becomes TDC++. The difference with the original TDC++ can be seen in their corresponding O.D.E. forms. The corresponding O.D.E. for (26) and (27) (original TDC++) can be expressed as: ddt t xt = ⌘(C + I) ⌘A A> C I A t xt . Meanwhile, the O.D.E. corresponding to (28) and (29) (new TDC++) becomes ddt t xt = ⌘(C + I) ⌘A A> ⌘(C + I) ⌘A t xt . We experiment under different of and ⌘ to examine the behavior of new TDC++. The result shows that in general, smaller leads to better performance. The results are given in Appendix Section 7.9. Lemma 3.3. Consider the following O.D.E.: ̇t = ⌘(C + I) t ⌘Axt (30) ẋt = ut. (31) Suppose that we choose the control input ut := (A> ⌘(C + I)) t ⌘Axt. Assume ⌘ > 0 and and satisfies the following condition: + min(A) > min(C). Then, the above O.D.E. has globally asymptotically stable origin, i.e., ( t, xt) ! (0, 0) as t ! 1. The proof is given in Appendix Section 7.4.3. With Lemma 2.1, we can prove the convergence of stochastic update with (28) and (29) whose pseudo code is given in Algorithm 5 in Appendix. Theorem 3.3. Consider Algorithm 5 in Appendix. Under the step-size condition (33) and if ⌘ satisfies (20), then ⇠k ! ⇠⇤ as k ! 1 with probability one, where ⇠⇤ is the TD fixed point in (6). Remark 3.2. We can replace the regularization term with nonlinear terms satisfying certain condi- tions. The details are given in Appendix Section 7.6. 4 EXPERIMENTS We verify the performance and convergence of the proposed BTD under standard benchmarks to evaluate off-policy TD-learning algorithms, including Baird environment (Baird, 1995), RandomWalk (Sutton et al., 2009) with different features, and Boyan chain (Boyan, 2002). The details about the environments are given in Appendix Section 7.7. From the experiments, we see how BTD behaves under different coefficients ⌘ 2 { 0.5, 0.25, 0, 0.25, 0.5}. We measure the Root Mean-Squared Projected Bellman Error (RMSPBE) as the performance metric, and every results are averaged over 100 runs. From Table 1, the result with ⌘ = 0.5 shows the best performance except at Baird, where ⌘ = 0, corresponding to GTD2 performs best. There exist two aspects on the role of ⌘. 
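The two TDC++ variants above differ only in their O.D.E. matrices, so a quick way to compare them is to look at the spectra of those matrices. The sketch below builds both matrices as I read the two O.D.E. displays (random A, random positive definite C, arbitrary values for η and the regularization weight, here called beta) and prints the spectral abscissa of each; whether these are negative depends on conditions such as (20) and the assumption of Lemma 3.3.

```python
import numpy as np

# Eigenvalue comparison of the two TDC++ O.D.E. matrices in Section 3.3
# (sketch; A, C, eta, beta are placeholder values and the matrices follow my
# reading of the displayed O.D.E.s).
rng = np.random.default_rng(4)
n, eta, beta = 4, 1.5, 0.3
M = rng.normal(size=(n, n)); C = M @ M.T + np.eye(n)
A = rng.normal(size=(n, n))
I = np.eye(n)

tdcpp_orig = np.block([[-eta * (C + beta * I), -eta * A],
                       [A.T - C - beta * I,    -A      ]])
tdcpp_new  = np.block([[-eta * (C + beta * I),        -eta * A],
                       [A.T - eta * (C + beta * I),   -eta * A]])
for name, mat in [("original TDC++", tdcpp_orig), ("new TDC++", tdcpp_new)]:
    print(name, np.max(np.linalg.eigvals(mat).real))
```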
First of all, η can be thought of as a parameter that mitigates the effect of the instability coming from the matrix A in (9). For example, a smaller η can stabilize the system. However, as a trade-off, if η is too small, then the update rate might be too small as well. As a result, the overall convergence can be slower. Furthermore, η also controls the effect of −C in (13) in the BTD update rules, where −C corresponds to the (−η + η²)φ^⊤_kλ_kφ_k term in (17). Note that the role of η in the final BTD update rule in (17) offers a different perspective compared to that in (9). In particular, η = 1/2 maximizes the effect of −C in (17). From Table 1, it leads to reasonably good performance in most domains. Another natural choice is to multiply η with C instead of A. However, in such cases, we would need to introduce another constraint η > 0, whereas in the current BTD, convergence is guaranteed for all η ∈ ℝ. Finally, we note that simply multiplying C by a large positive constant does not lead to good results in general. This is because it may increase the variance and destabilize the algorithm. Overall results are given in Appendix Section 7.8. 5 CONCLUSION In this work, we have proposed a new framework to design off-policy TD-learning algorithms from a control-theoretic view. Future research directions include extending the framework to the non-linear function approximation setting. 6 ACKNOWLEDGEMENTS This work was supported by the National Research Foundation under Grant NRF2021R1F1A1061613, an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00469), and the BK21 FOUR program from the Ministry of Education (Republic of Korea). (Corresponding author: Donghwan Lee.)
1. What is the focus of the paper regarding TD-learning? 2. What are the strengths of the proposed approach, particularly in terms of stability and unifying perspective? 3. What are the weaknesses of the paper, especially regarding the intuition and implementation of the new algorithm? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper studies TD-learning from a non-linear control theory perspective. Motivated by the idea of designing stabilizing control policies for non-linear ODE systems based on the control Lyapunov function method, the paper studies TD-learning as a special case. The paper designs a novel TD-learning algorithm that is stable under linear function approximation in the off-policy learning setting, and provides extensions to GTD2 and TDC. The paper is mostly theoretical and uses a few experiments to illustrate properties of the new algorithm. Strengths And Weaknesses === Strength === The paper seems to provide a novel and unifying perspective on viewing various TD-learning algorithms under linear function approximation in the off-policy learning setting. Prior work such as GTD and TDC, which has been motivated to bypass the instability of vanilla TD-learning, can be understood as special instances under the new framework proposed in this paper. The paper also suggests a new stable TD-learning algorithm, which is quite interesting in its own right. === Weakness === Overall, the intuition of the new algorithm is not very clear, despite the fact that it has been derived based on the control Lyapunov function and leads to stable learning, and the fact that it can be implemented as a single time-scale algorithm. The paper can be greatly improved if further intuitions and explanations of the new algorithm can be provided, beyond the control-theoretic mathematical aspects of the algorithm. The experiment section of the paper is also quite simple, making it more difficult to assess the advantage of the new algorithm. Clarity, Quality, Novelty And Reproducibility === Clarity === The paper is written clearly overall. Presentation-wise, certain equations can be presented in a more visually clear manner. General readers will also appreciate more in-text explanations behind the equations. === Quality === The paper has relatively high technical quality. === Novelty === The paper is novel as far as I can see, in that it provides a novel application of non-linear control theory to the TD-learning case and has managed to derive a new algorithm too. === Reproduce === Experiment results are fairly simple and should be easily reproducible.
ICLR
Title Backstepping Temporal Difference Learning Abstract Off-policy learning ability is an important feature of reinforcement learning (RL) for practical applications. However, even one of the most elementary RL algorithms, temporal-difference (TD) learning, is known to suffer from a divergence issue when the off-policy scheme is used together with linear function approximation. To overcome the divergent behavior, several off-policy TD-learning algorithms, including gradient-TD learning (GTD) and TD-learning with correction (TDC), have been developed until now. In this work, we provide a unified view of such algorithms from a purely control-theoretic perspective, and propose a new convergent algorithm. Our method relies on the backstepping technique, which is widely used in nonlinear control theory. Finally, convergence of the proposed algorithm is experimentally verified in environments where standard TD-learning is known to be unstable. 1 INTRODUCTION Since Mnih et al. (2015), which demonstrated that deep reinforcement learning (RL) outperforms humans in several video games (Atari 2600 games), significant advances have been made in RL theory and algorithms. For instance, Van Hasselt et al. (2016); Lan et al. (2020); Chen et al. (2021) proposed variants of the so-called deep Q-network (Mnih et al., 2015) that achieve higher scores in Atari games than the original deep Q-network. An improved deep RL agent was developed in Badia et al. (2020) that performs better than the average human score across 57 Atari games. Beyond performing well in video games, Schrittwieser et al. (2020) have also shown that an RL agent can self-learn chess, Go, and Shogi. Furthermore, RL has shown great success in real-world applications, e.g., robotics (Kober et al., 2013), healthcare (Gottesman et al., 2019), and recommendation systems (Chen et al., 2019). Despite the practical success of deep RL, there is still a gap between theory and practice. One of the notorious phenomena is the deadly triad (Sutton & Barto, 2018), the diverging issue of the algorithm when function approximation, off-policy learning, and bootstrapping are used together. One of the most fundamental algorithms, the so-called temporal-difference (TD) learning (Sutton, 1988), is known to diverge under the deadly triad, and several works have tried to fix this issue for decades. In particular, the seminal works Sutton et al. (2008; 2009) introduced the so-called GTD, gradient-TD2 (GTD2), and TDC, which are off-policy, and have been proved to be convergent with linear function approximation. More recently, Ghiassian et al. (2020) suggested a regularized version of TDC called TD-learning with regularized correction (TDRC), and showed its favorable features under off-policy settings. Moreover, Lee et al. (2021) developed several variants of GTD based on a primal-dual formulation. On the other hand, backstepping control (Khalil, 2015) is a popular method for designing stable controllers for nonlinear systems with special structures. The design technique offers a wide range of stable controllers, and is proved to be robust under various settings. It has been used in various fields including quadrotor helicopters (Madani & Benallegue, 2006), mobile robots (Fierro & Lewis, 1997), and ship control (Fossen & Strand, 1999). Using the backstepping control technique, in this paper we develop a new convergent off-policy TD-learning algorithm which is a single time-scale algorithm.
1. What is the focus and contribution of the paper on gradient TD methods? 2. What are the strengths of the proposed approach, particularly in its connection to existing methods? 3. What are the weaknesses of the paper, especially regarding the development of the new algorithm? 4. Do you have any concerns about the ODEs used in the proposed approach? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper presents a brand new approach for developing gradient TD methods from canonical control theories. The approach starts with finding an ODE of interest and then uses back-stepping to stabilize the ODE. The approach succeeded in both recovering existing GTD methods and developing some variants. Strengths And Weaknesses Strengths: This paper is well written and easy to follow. Using back-stepping to develop stable off-policy RL algorithms appears novel to me and can probably open up a new direction in RL. The authors did a good job of connecting their methods with existing ones. Though I didn't check the proof line by line, I feel confident that the proof should hold. Weaknesses: My major concern is that the proposed approach feels very hand-crafted. The development of the new algorithm starts from the ODEs in (9) and (10). I absolutely have no idea where the two ODEs come from. It seems that the ODE in (9) comes from the ODE just above (7), which is the ODE from the GTD2 updates. But how do the authors motivate the introduction of \eta in (9)? Even if we can accept that the authors somehow magically introduced this \eta, motivating (9) with the ODE from GTD still does not make sense to me. If we do not know GTD beforehand, is there still a reasonable way to write down the ODEs in (9) and (10)? If the authors cannot provide an affirmative and convincing answer, I do not think the proposed framework of using back-stepping is significant -- it can only start from existing solutions and cannot start from the problem. Clarity, Quality, Novelty And Reproducibility Quality, clarity and originality are good
ICLR
Title Quality Matters: Embracing Quality Clues for Robust 3D Multi-Object Tracking Abstract 3D Multi-Object Tracking (MOT) has achieved tremendous progress thanks to the rapid development of 3D object detection and 2D MOT. Recent advanced works generally employ a series of object attributes, e.g., position, size, velocity, and appearance, to provide the clues for the association in 3D MOT. However, these cues may not be reliable due to some visual noise, such as occlusion and blur, leading to a tracking performance bottleneck. To reveal the dilemma, we conduct extensive empirical analysis to expose the key bottleneck of each clue and how they correlate with each other. The analysis results motivate us to efficiently absorb the merits among all cues and adaptively produce an optimal tracking manner. Specifically, we present Location and Velocity Quality Learning, which efficiently guides the network to estimate the quality of predicted object attributes. Based on these quality estimations, we propose a quality-aware object association (QOA) strategy to leverage the quality score as an important reference factor for achieving robust association. Despite its simplicity, extensive experiments indicate that the proposed strategy significantly boosts tracking performance by 2.2% AMOTA, and our method outperforms all existing state-of-the-art works on nuScenes by a large margin. Moreover, QTrack achieves 48.0% and 51.1% AMOTA tracking performance on the nuScenes validation and test sets, which significantly reduces the performance gap between pure camera and LiDAR based trackers. 1 INTRODUCTION 3D Multi-Object Tracking (MOT) has been recently drawing increasing attention since it is widely applied to 3D perception scenes, e.g., autonomous driving and automatic robots. The 3D MOT task aims at locating objects and associating the targets of the same identities to form tracklets. According to the used sensors, existing 3D MOT methods can mainly be categorized into two classes, i.e., camera-based and LiDAR-based schemes. In this paper, we mainly delve into the camera-only scheme since it provides rich semantic information and is more economical. Existing 3D MOT methods mostly adopt the tracking-by-detection paradigm. In this regime, a 3D detector is first employed to predict 3D boxes and the corresponding classification scores, and then some post-processing methods (e.g., motion-based Kalman (1960) or appearance-based) are used to link detected targets to form trajectories. In the camera scheme, it is natural to extract objects' discriminative appearance features Chaabane et al. (2021); Hu et al. (2022) to represent targets and use the features to measure the similarities among detected targets. However, the procedure of extracting the appearance feature is cumbersome since it requires predicting a high-dimensional embedding, which is hard for joint training due to the optimization contradiction between the detection and embedding branches Yu et al. (2022b). Moreover, it is difficult to deal with the notorious occlusion and motion blur issues. Some other methods Weng et al. (2020a); Pang et al. (2021) build a motion model (Kalman Filter) to obtain some desired states of tracking clues (e.g., center position, size, size ratio, or rotation) under a linear motion assumption. Nevertheless, this process involves various hyper-parameters (e.g., initialization uncertainty of measurement, state and process, etc.) and executes complex matrix transpose operations.
Different from the aforementioned methods, CenterPoint Yin et al. (2021) reasonably leverages predicted center locations and velocities of targets for building motion. In detail, it uses the time lag between two moments of observation to multiply the predicted velocity for linear location prediction. Afterwards, the L2 distance among targets acts as a measurement metric for the association procedure. For simplicity, we call this tracking framework the CV method. It is effective in achieving remarkable tracking performance, while only requiring simple operations (i.e., matrix addition and multiplication) for parallel cost computation. Although the CV framework shows efficiency for 3D MOT tasks, it relies heavily on the predicted quality of center location and velocity. The requirement may be harsh for the 3D base detector, since estimating the center location and velocity of an object from a single image is an ill-posed problem. As shown in Fig. 1, notorious occlusion, motion blur, and external illumination issues will significantly disturb the estimation performance. To further confirm this issue, we conduct an empirical analysis to study the predicted center location and velocity quality distribution as well as their correlations. Our study reveals two valuable points: (1) There exists a significant gap between the estimation error of 3D centers and that of velocities; (2) The predicted quality of location and velocity is extremely misaligned. The imbalanced tracking cues have little effect on the detection performance but play a dramatic role in MOT. These analysis results motivate us to endow each predicted box with the ability to self-diagnose its tracking clues for realizing stable tracking association. To this end, we propose to forecast the quality of tracking clues from the base 3D detector. Specifically, we introduce a Normalized Gaussian Quality (NGQ) metric with two dimensions to measure the quality of predicting center location and velocity. The NGQ metric comprehensively considers the vector errors of the two predictions in a 2D vector space, which is a prerequisite for our tracking framework. Based on the quality estimation of NGQ, we design a robust association mechanism, i.e., the Quality-aware Object Association (QOA) strategy. It adopts the velocity quality to filter out low-quality motion candidates, and leverages the location quality to further rule out center positions of boxes with bad estimations. Therefore, QOA not only effectively deals with hard cases but also avoids dangerous associations. In a sense, our method follows the "Put Quality Before Quantity" principle. By combining the proposed methods with the baseline 3D detector, we obtain a simple and robust 3D MOT framework, namely the quality-aware 3D tracker (QTrack). We conduct extensive experiments on the nuScenes dataset Caesar et al. (2020), showing significant improvements in the 3D MOT task. Comprehensively, the contributions of this work are summarized as follows: • We conduct extensive empirical analysis to point out that the predicted quality of center location and velocity exhibits a large distribution gap and a misalignment relationship, making the efficient CV tracking framework fall into sub-optimal performance. • We are the first to propose predicting the quality of velocity and location, measured by our designed NGQ metric. Afterwards, we further introduce QOA to leverage the two qualities to ensure safe association in the 3D MOT task.
• The overall 3D MOT framework (QTrack) achieves SOTA performance on the nuScenes dataset, outperforming other camera-based methods by a large margin. Specifically, QOA improves the baseline tracker by +2.2% AMOTA across several 3D detector settings, showing its effectiveness. 2 RELATED WORK 2.1 3D MULTI-OBJECT TRACKING Thanks to the development of 3D detection Huang et al. (2021); Li et al. (2022a); Liu et al. (2022) and 2D MOT technologies Han et al. (2022); Yu et al. (2022b;a); Zhang et al. (2022b), recent 3D MOT methods Weng et al. (2020a); Yin et al. (2021); Chaabane et al. (2021); Hu et al. (2022); Pang et al. (2021) mainly follow the tracking-by-detection paradigm. Trackers following this paradigm first utilize a 3D object detector to localize the targets in the 3D space (including location, rotation, and velocity) and then associate the detected objects with the trajectories according to various cues (location or appearance). Traditional 3D MOT usually uses a motion model (Kalman filter) to predict the location of the tracklets and then associates the candidate detections using 3D (G)IoU Weng et al. (2020a); Pang et al. (2021) or L2 distance Yin et al. (2021). Some works also utilize an advanced appearance model (ReID) Chaabane et al. (2021); Weng et al. (2020b); Chaabane et al. (2021) or a temporal model (LSTM) Marinello et al. (2022); Hu et al. (2022) to provide more reference cues for the association. Recently, the Transformer Vaswani et al. (2017) has been used in 3D detection Wang et al. (2022) and MOT Li & Jin (2022); Zhang et al. (2022a) to learn 3D deep representations with 2D visual information and trajectories encoded. Although these methods achieved remarkable performance, when they are applied to complex scenarios (e.g., occlusion, motion blur, or light weakness), the tracking performance becomes unsatisfactory. In this work, we argue that a simple velocity clue with quality estimation can deal with the corner cases and achieve robust tracking performance. Our proposed QTrack focuses on how to assess the quality of the location and velocity prediction, and then make full use of these quality scores in the matching process. 2.2 PREDICTION QUALITY ESTIMATION Estimating the quality of a model's predictions is non-trivial, and such estimates can be applied to tackle prediction imbalance or to support decision-making. In the field of object detection, advanced works Wang et al. (2021); Tian et al. (2019); Jiang et al. (2018) propose predicting a box's centerness or IoU to perceive the quality of predicted (3D) boxes. Huang et al. (2019) employ the method to perceive the predicted mask quality. These methods can alleviate the imbalance between classification score and location accuracy. Li et al. (2022c) introduces an uncertainty-based method to estimate the predicted quality of several depth factors, and the quality is then employed to make optimal decisions. In this paper, we propose to predict the quality of velocity and location. Afterwards, the predicted quality is used to eliminate non-robust association cases in the tracking task. To our knowledge, our work is the first effort to perceive the velocity and location qualities for decision-making in the 3D MOT task. 2.3 MULTI-VIEW 3D OBJECT DETECTION 3D object detection is the predecessor task for the 3D MOT task. It can be split into two streams of methods, including point-based Lang et al. (2019); Yan et al. (2018); Yin et al. (2021); Shi et al. (2019; 2020); Yang et al. (2022c) and camera-based detectors Wang et al. (2021); Huang et al.
(2021); Li et al. (2022a); Wang et al. (2022); Liu et al. (2022); Li et al. (2022b). In this paper, we focus on 3D MOT for the multi-view camera-based framework, which has made tremendous advances. Transformer-based methods Wang et al. (2022); Liu et al. (2022); Li et al. (2022b) introduce 3D object queries to interact with the multi-view image feature map. The 3D object queries are constantly refined to predict 3D boxes and other tasks in an end-to-end manner. BEVDet Huang et al. (2021) and BEVDepth Li et al. (2022a) directly project the multi-view image feature into a BEV representation and attach a center-based head Yin et al. (2021) to conduct the detection task. Standing on the shoulders of giants, we aim to equip BEVDepth with the ability to perceive the quality of velocity and center location, which is the key to diagnosing non-robust associations for tracking. We then introduce a novel "tracking-by-detection" framework (QTrack) to endow BEVDepth with effective and efficient tracking. 3 METHODOLOGY 3.1 DELVE INTO THE QUALITY DISTRIBUTION We aim to solve the task of 3D multi-object tracking (3D MOT), the goal of which is to locate the objects in the 3D space and then associate the detected targets with the same identity into tracklets. The key challenge is how to associate the tracklets efficiently and correctly. In contrast to the motion-based and appearance-based association strategies, we argue that the simple velocity clue (CV method) is enough for the association, and it is more lightweight and deployment-friendly. However, the performance of the existing CV tracking framework is not satisfactory. To analyze the reason for the limited performance of tracking with velocity, we count and visualize the distributions of the location and velocity prediction errors. As illustrated in Fig. 2 (a) and (b), we can observe that the distribution of the location and velocity quality (prediction error) is scattered, and a sizable number of low-quality boxes are included. Moreover, Fig. 2 (c) shows that the correlation between the location and velocity error distributions is nonlinear, which means the quality of the location and velocity is seriously misaligned. Based on these observations, we conclude that the limited performance of tracking with velocity is due to the following reasons: (1) Low quality of the location or velocity. When one of the location and velocity predictions is not accurate enough, the tracker cannot perform well even if the other prediction is reliable. (2) Misalignment between the quality of location and velocity. We should take both location and velocity quality into consideration. Driven by this analysis, we propose Location and Velocity Quality Learning to learn the quality uncertainty of the location and velocity, which can assist the tracker to select high-quality candidates for the association. 3.2 BASE 3D OBJECT DETECTOR Our method can be easily coupled with most existing 3D object detectors with end-to-end training. In this paper, we take BEVDepth Li et al. (2022a) as an example. BEVDepth is a camera-based Bird's-Eye-View (BEV) 3D object detector that transfers the multi-view image features to the BEV feature through a depth estimation network and then localizes and classifies the objects in the BEV view. It consists of four kinds of modules: an image-view encoder, a view transformer with explicit depth supervision utilizing encoded intrinsic and extrinsic parameters, a BEV encoder, and a task-specific head.
The entire network is optimized with a multi-task loss function: Ldet = Ldepth + Lcls + Lreg, (1) where the depth loss Ldepth, classification loss Lcls, and regression loss Lreg remain the same as in the original paper. As illustrated in Fig. 3, the tasks of the regression branch include heatmap, offsets, height, size, rotation, and velocity. 3.3 LOCATION AND VELOCITY QUALITY LEARNING To effectively estimate the quality of location and velocity, we first need to define the quality measurement metric. Technically, the box's center location is calculated by combining the predicted heatmap and the corresponding offsets, so the location quality can be simplified to the offset prediction quality. Specifically, the offsets and velocity are defined in a 2-dimensional vector space. We introduce a Normalized Gaussian Quality (NGQ) metric to represent their quality. Given a predicted vector P ∈ R2 and a ground truth vector G ∈ R2, we formulate the NGQ metric as: NGQ = exp(−√((Px − Gx)² + (Py − Gy)²) / γ), (2) where the subscripts x and y indicate the values in the x and y directions, while γ is a hyperparameter to control the value distribution of NGQ. We set γ to 1.0 and 3.0 for location and velocity, respectively. P and G can be instantiated as the predicted offset and velocity. When the prediction is equal to the ground truth, NGQ = 1, while the larger the prediction error, the closer NGQ is to 0. After defining the quality, we elaborate on how to learn it. As shown in Fig. 3, we attach a 3×3 convolution layer to the offset and velocity branches to predict the location quality NGQloc ∈ R1 and velocity quality NGQvel ∈ R1, respectively. The quality supervision is conducted by a binary cross entropy (BCE) loss: Lquality = −(1/N) Σ_{i=1}^{N} [ N̂GQi · log NGQi + (1 − N̂GQi) · log(1 − NGQi) ], (3) where N̂GQ is the ground truth quality calculated by Eq. 2 and NGQ is the network prediction. Thus far, the total loss for our detector is formulated as: Ltotal = Ldet + Lquality. (4) The overall training procedure is end-to-end, and the quality prediction task does not damage the performance of the base detector. Moreover, the quality estimation is used in our proposed Quality-aware Object Association (QOA) module, which will be discussed in the next section. 3.4 QUALITY-AWARE OBJECT ASSOCIATION After obtaining the quality of the center location and velocity, we have more reference cues to achieve robust and accurate association. To this end, we propose a simple but effective quality-aware object association strategy (QOA). Specifically, QOA sets up two "gates". The first gate is the classification confidence score (cls score). We first separate the candidate detection boxes into high score ones and low score ones according to their cls scores. The high score candidates are first associated with the tracklets. Then the unmatched tracklets are associated with the low score candidates. These low score candidates are mostly caused by occlusion, motion blur, or light weakness, and are easily confused with miscellaneous boxes. To deal with this issue, the second gate, the quality uncertainty score, is introduced. After getting the second association results between the unmatched tracklets and the low score candidates, we recheck the matched track-det pairs according to the location and velocity quality scores. Only high-quality matched track-det pairs remain, and low-quality pairs are regarded as mismatches. The pseudo-code of QOA is shown in Algorithm 1. Benefiting from the quality estimation, QOA does not need a complex motion or appearance model to provide association cues. (A small numerical sketch of the NGQ target and the quality loss defined above is given below, before the association steps are detailed.)
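To make the NGQ definition (Eq. 2) and the quality loss (Eq. 3) concrete, here is a minimal sketch with toy 2-D predictions and ground truths; the array values and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ngq(pred, gt, gamma):
    """Normalized Gaussian Quality (Eq. 2): 1 for a perfect prediction, -> 0 as the error grows."""
    err = np.linalg.norm(pred - gt, axis=-1)      # Euclidean error of the 2-D vector
    return np.exp(-err / gamma)

def quality_loss(ngq_pred, ngq_target):
    """Binary cross entropy between the predicted quality and the NGQ target (Eq. 3)."""
    eps = 1e-7
    ngq_pred = np.clip(ngq_pred, eps, 1.0 - eps)
    return -np.mean(ngq_target * np.log(ngq_pred) +
                    (1.0 - ngq_target) * np.log(1.0 - ngq_pred))

# Toy example: two boxes with predicted / ground-truth velocities (m/s).
vel_pred = np.array([[2.0, 0.1], [0.5, -3.0]])
vel_gt   = np.array([[1.8, 0.0], [2.0, -1.0]])
ngq_target = ngq(vel_pred, vel_gt, gamma=3.0)     # gamma = 3.0 for velocity, 1.0 for location
ngq_from_head = np.array([0.9, 0.4])              # stand-in for the network's quality outputs
print(ngq_target, quality_loss(ngq_from_head, ngq_target))
```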
A simple velocity prediction (CV) is enough (line #15). Hence, we use the velocity of the tracklet at frame t − 1 to predict the center location at frame t, and then compute the L2 distance between predictions and candidate detections (line #17 and line #20) as the similarity. At last, we apply the similarity with the Hungarian algorithm to get the association results. Mathematically, ct = ct−1 + vt−1∆t, cost = L2(ct, dt), match = Hungarian(cost), (5) where ct−1 and vt−1 represent the center location and velocity of the tracklet at frame t − 1, dt is the candidate detection center location at frame t, and ∆t is the time lag. (A minimal sketch of this association step is given after the implementation details below.)
Algorithm 1: Pseudo-code of QOA.
Input: A video sequence V; object detector Det; detection score threshold τ; quality score thresholds µv, µt
Output: Tracks T of the video
1 Initialization: T ← ∅
2 for frame fk in V do
/* boxes & scores */
3 Dk ← Det(fk)
4 Dhigh ← ∅
5 Dlow ← ∅
/* first gate */
6 for d in Dk do
7 if d.score > τ then
8 Dhigh ← Dhigh ∪ {d}
9 end
10 else
11 Dlow ← Dlow ∪ {d}
12 end
13 end
/* predict location */
14 for t in T do
15 t ← CV(t)
16 end
/* association with high scores */
17 Associate T and Dhigh using L2 distance
18 Dremain ← remaining object boxes from Dhigh
19 Tremain ← remaining tracks from T
/* association with low scores */
20 Associate Tremain and Dlow using L2 distance
21 Tsec, Dsec ← matched pairs from Tremain, Dlow
22 Tre−remain ← remaining tracks from Tremain
/* second gate */
23 for t, d in Tsec, Dsec do
24 if t.vscore < µv or d.lscore < µt then
25 Tre−remain ← Tre−remain ∪ {t}
26 end
27 end
/* update and initialize */
28 T ← T \ Tre−remain
29 for d in Dremain do
30 T ← T ∪ {d}
31 end
32 end
33 Return: T
4 EXPERIMENTS 4.1 DATASETS AND METRICS Datasets. We mainly evaluate our QTrack on the 3D detection and tracking datasets of nuScenes. The nuScenes dataset is a large-scale autonomous driving benchmark that consists of 1000 real-world sequences: 700 sequences for training, 150 for validation, and 150 for testing. Each sequence has roughly 40 keyframes, which are annotated for each sensor (e.g., LiDAR, Radar, and Camera) with a sampling rate of 2 FPS. Each frame includes images from six cameras with a full 360-degree field of view. For the detection task, there are 1.4M annotated 3D bounding boxes from 10 categories. For the tracking task, it provides 3D tracking bounding boxes from 7 categories. Metrics. For the 3D detection task, we report the nuScenes Detection Score (NDS) and mean Average Precision (mAP), as well as five True Positive (TP) metrics: mean Average Translation Error (mATE), mean Average Scale Error (mASE), mean Average Orientation Error (mAOE), mean Average Velocity Error (mAVE), and mean Average Attribute Error (mAAE). For the 3D tracking task, we report Average Multi-Object Tracking Accuracy (AMOTA) and Average Multi-Object Tracking Precision (AMOTP). We also report metrics used in the 2D tracking task from CLEAR Bernardin et al. (2006), e.g., MOTA, MOTP, and IDS. 4.2 IMPLEMENTATION DETAILS Following BEVDepth, we adopt three types of backbone: ResNet-50 He et al. (2016), ResNet-101, and VoVNet-99 (initialized from DD3D Park et al. (2021)) as the image backbone. If not specified, the image size is processed to 256×704. The data augmentation includes random cropping, random scaling, random flipping, and random rotation. In addition, we also adopt BEV data augmentations including random scaling, random flipping, and random rotation. We use AdamW as the optimizer with a learning rate of 2 × 10−4 and a batch size of 64.
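As promised above, the following is a minimal sketch of the association step in Eq. 5 and Algorithm 1: CV prediction, L2 cost with Hungarian matching, a first gate on the classification score, and a second gate on the quality scores. The track and detection dictionaries, field names, thresholds, and toy structure are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, dets, dt, tau=0.35, mu_v=0.5, mu_t=0.5, max_dist=5.0):
    """One QOA step: CV prediction, first gate (cls score), L2 + Hungarian, second gate (quality)."""
    # CV prediction: c_t = c_{t-1} + v_{t-1} * dt
    pred = np.array([t["c"] + t["v"] * dt for t in tracks]) if tracks else np.zeros((0, 2))
    high = [d for d in dets if d["score"] > tau]            # first gate
    low = [d for d in dets if d["score"] <= tau]

    def match(track_idx, cand):
        if not track_idx or not cand:
            return [], track_idx
        cost = np.linalg.norm(pred[track_idx, None, :] -
                              np.array([d["c"] for d in cand])[None, :, :], axis=-1)
        rows, cols = linear_sum_assignment(cost)
        pairs = [(track_idx[r], cand[c]) for r, c in zip(rows, cols) if cost[r, c] < max_dist]
        matched = {p[0] for p in pairs}
        return pairs, [i for i in track_idx if i not in matched]

    pairs_high, remain = match(list(range(len(tracks))), high)
    pairs_low, remain = match(remain, low)
    # second gate: keep a low-score pair only if both quality scores are high enough
    pairs_low = [(i, d) for i, d in pairs_low
                 if tracks[i]["vscore"] >= mu_v and d["lscore"] >= mu_t]
    return pairs_high + pairs_low
```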
When compared with other methods, QTrack is trained for 24 epochs for ResNet and 20 epochs for VoVNet with CBGS Zhu et al. (2019). 4.3 COMPARISON WITH PRECEDING SOTAS Test and validation set. We compare the performance of QTrack with preceding SOTA methods on the nuScenes benchmark. The results are reported in Tab. 1. Our QTrack outperforms all current SOTA camera-based trackers by a large margin. For both validation and test sets, all reported metrics (e.g., AMOTA, AMOTP, RECALL, IDS, etc.) achieve the best performance. Specifically, the AMOTA of QTrack reaches 0.511 for the first time, which significantly reduces the performance gap between pure camera and LiDAR-based trackers. Comparison with other post-processing trackers. Tab. 2 illustrates that QTrack outperforms the naive Kalman filter based method and its advanced variant from SimpleTrack Pang et al. (2021) under identical 3D detector and backbone settings. Moreover, our method only needs simple operations (i.e., matrix multiplication and addition) for the tracking procedure, while Kalman filter based ones need relatively complex operations like matrix transposes and an involved process for adjusting hyper-parameters. The overall tracking framework is significantly efficient and does not introduce serious latency, which would be fatal in a real perception scenario Yang et al. (2022a;b). 4.4 ABLATION STUDY In this subsection, we verify the effectiveness of the proposed strategies separately through ablation studies. All the experiments are conducted on the nuScenes val set. Analysis of the location and velocity quality for tracking. In this part, we conduct an in-depth analysis of the location and velocity quality scores in the association process. As mentioned before, the location and velocity quality scores are obtained by the quality branch. They are both regarded as reference clues to filter the low classification confidence association results in QOA. We verify the performance of only using one of them as the second gate of QOA, and the results are reported in Tab. 3. As shown, using only one of the location and velocity quality scores does not contribute to the tracking performance, which confirms our analysis that the location and velocity quality is not aligned and we should take both of them into consideration. Analysis of the components of QTrack. In this part, we verify the effectiveness of various components in QTrack through an ablation study. As shown in Tab. 4, the first row of the table shows the baseline performance for tracking when using BEVDepth detections followed by a simple velocity association step (CV method). We can observe that the two gates of QOA can both improve the tracking performance in all settings (ResNet-50, ResNet-101 or VoVNet-99, single-frame or multi-frame), which means that filtering low-quality association results is necessary. Furthermore, we can observe that the IDS metric increases when applying the first gate by classification confidence score. This phenomenon shows that only considering the confidence score inevitably introduces low-quality bounding boxes, which causes bad association cases. Therefore, the second gate, the quality score, can provide a fine-grained reference to achieve a better association trade-off. Influence on the base 3D detector. As shown in Tab. 5, adding the quality prediction branch does not affect the performance of the base 3D detector.
This is an extremely important property since post-processing trackers normally rely heavily on the performance of the detector. Going one step further, we report the tracking performance when employing the existing CV and SimpleTrack schemes. It reveals that the tracking performance is not affected by our quality branch, which agrees with our design purpose in Sec. 1. Then, we explore appending an appearance branch for extracting instance-wise appearance embeddings, whose implementation is the same as Zhang et al. (2021). The results show that a slight performance degradation (nearly 0.5%) is triggered on the detection task, but it significantly damages the performance of the tracking task by nearly 1.0%. This reflects that our method is more effective and efficient. 4.5 DISCUSSION AND FUTURE WORK Inspired by Jiang et al. (2018); Wu et al. (2020); Yang et al. (2022d), we explore incorporating the velocity quality V with the classification score C into a fused score M, which is adopted as the threshold metric in the NMS procedure. Technically, we formulate M in Eq. 6, in which α is a hyper-parameter to control the contribution of V: M = V^(1−α) · C^α. (6) As shown in Fig. 4, we plot four detection performance metrics while controlling α. The figure reflects that as the contribution of V becomes bigger, mAVE drops dramatically. However, it also brings about an inevitable performance degradation for the mAP and mATE metrics. NDS, as a comprehensive metric, becomes better and then gets worse as α grows, which is actually a trade-off between location error and velocity error. This phenomenon agrees with our viewpoint in Sec. 1, i.e., the qualities of these two prediction tasks are not aligned. Combining the performance of the detection and tracking tasks with respect to the above imbalance issue, this exposes a challenge: how to design a method that simultaneously predicts location (or the 3D box) and velocity well? Addressing this challenge can further boost the performance of the 3D detection task and other downstream tasks like 3D MOT. 5 CONCLUSION In this paper, we analyze the imbalanced prediction quality distribution of location and velocity. It motivates us to propose a Quality-aware Object Association (QOA) method to alleviate the imbalance issue for 3D multi-object tracking (3D MOT). To this end, we introduce the Normalized Gaussian Quality (NGQ) metric to measure the predicted quality of location and velocity, and construct an effective module for quality learning. Afterwards, we further present QTrack, a "tracking-by-detection" framework for 3D MOT in the multi-view camera scene, which incorporates QOA to perform the tracking procedure. The extensive experiments demonstrate the efficacy and robustness of our method. Finally, we raise a challenge to inspire more research to focus on the imbalance between localization and velocity qualities for both 3D detection and tracking tasks.
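To illustrate the score fusion in Eq. 6 of Sec. 4.5 above, here is a minimal sketch; the sample scores and the α values are made-up numbers for illustration, not results from the paper.

```python
import numpy as np

def fused_score(v_quality, cls_score, alpha):
    """Eq. 6: M = V^(1 - alpha) * C^alpha, used as the NMS threshold metric."""
    return v_quality ** (1.0 - alpha) * cls_score ** alpha

v = np.array([0.9, 0.3, 0.7])      # velocity quality V per box (made-up)
c = np.array([0.6, 0.8, 0.5])      # classification score C per box (made-up)
for alpha in (0.0, 0.5, 1.0):      # alpha = 1 recovers the plain classification score
    print(alpha, np.round(fused_score(v, c, alpha), 3))
```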
1. What is the focus and contribution of the paper on 3D MOT? 2. What are the strengths and weaknesses of the proposed approach, particularly in comparison with other works like ImmortalTracker? 3. Do you have any concerns or suggestions regarding the usage of quality scores in the method? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes to predict a quality score for location and velocity from the detector, and then use these quality scores to prune unreliable associations in tracking. The results demonstrate its effectiveness on the nuScenes benchmark. Strengths And Weaknesses 1/ As ImmortalTracker [1] is the simplest and an effective baseline for 3D MOT, it is necessary to compare the proposed method with it. The conclusion from [1] is somewhat contradictory to that from this paper. Also, ImmortalTracker does not rely on accurate velocity prediction as its motion model. It may alleviate the aforementioned problem in this paper. It is also important to report results based on LiDAR-based detectors. It is valuable to know whether this phenomenon still exists for LiDAR-based methods. There are many possible, more principled ways to use the quality score; for example, it may be treated as a variance in the motion model and association. I really appreciate the findings from the authors; however, in a conference like ICLR, I would like to see more in-depth analyses and methods. The writing needs improvement. The authors should polish the paper carefully. [1] Wang, Qitai, et al. "Immortal Tracker: Tracklet Never Dies." arXiv preprint arXiv:2111.13672 (2021). Clarity, Quality, Novelty And Reproducibility See above
ICLR
1. What is the main contribution of the paper regarding box association in tracking? 2. What are the strengths and weaknesses of the proposed method, particularly in its performance improvement and contribution compared to other works? 3. Do you have any concerns or questions regarding the experiments and their analysis, such as the selection of hyperparameters and gating scores? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions for improving the paper, such as providing more supporting arguments or discussing the applicability of the method to lidar-based detections?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper In this paper, the authors propose to leverage the quality scores of the velocity and position predictions from the detection network to gate the box association in tracking. This is motivated by the fact that the position and velocity of the detected boxes are the critical attributes used in the tracking association. Since the error distributions of these two attributes are not similar, both of them are needed in the gating. To obtain the quality scores from the detection network, the authors use the Normalized Gaussian Quality to facilitate the learning. In experiments, combined with using boxes with lower classification confidence scores, the proposed method can achieve state-of-the-art performance on the nuScenes vision-only tracking task. Strengths And Weaknesses Strengths: The proposed method achieves impressive performance on the nuScenes vision MOT leaderboard. Weaknesses: My primary concern is that from Table 4, the performance improvement mostly comes from the classification confidence score gating. The contribution from Q seems not very significant. The confidence score gating method is similar to ByteTrack, which limits the contribution and novelty of the paper. Following (1), the ByteTrack paper is not cited clearly in the sections on related work and the proposed method. In Figure 1, the authors claim that occlusion, illumination variance, and motion blur affect the quality of the prediction. To me, these seem like very general issues. There are no experiments to show that the quality scores indeed reflect the velocity or translation errors. Maybe another plot on the validation set is needed. In the experiments, some analysis is missing. For example, why use BCE for the quality scores? How does a regression loss such as the L1 or L2 norm perform? How about hyper-parameter tuning, such as γ? How is the threshold for the gating scores in Algorithm 1 selected? In the evaluation of the detection performance, only mAP and NDS are reported. However, it is more important to report mATE and mAVE, since the quality scores are adopted for them. It is not clear to me why this method is only used for vision-based trackers. Did the authors try to apply this method to LiDAR-based detections? Is a similar performance boost expected? Clarity, Quality, Novelty And Reproducibility Quality and novelty: This paper could be important for practical applications. However, due to the weaknesses mentioned above, its novelty is limited. Reproducibility: The proposed method is clean and simple and thus should be easy to reproduce with some parameter tuning. Clarity: The paper is easy to follow. The writing can be further improved: the CV method is used multiple times, but no explanation is given for it. What is CV short for? Constant Velocity? Center and Velocity? No parentheses for the citations. ByteTrack is not discussed and cited clearly in the related work and proposed method. Some claims are not supported by arguments (Weakness 3)
ICLR
Title Quality Matters: Embracing Quality Clues for Robust 3D Multi-Object Tracking Abstract 3D Multi-Object Tracking (MOT) has achieved tremendous progress thanks to the rapid development of 3D object detection and 2D MOT. Recent advanced works generally employ a series of object attributes, e.g., position, size, velocity, and appearance, to provide the clues for the association in 3D MOT. However, these cues may not be reliable due to visual noise, such as occlusion and blur, leading to a tracking performance bottleneck. To reveal this dilemma, we conduct extensive empirical analysis to expose the key bottleneck of each clue and how the clues correlate with each other. The analysis results motivate us to efficiently absorb the merits of all cues and adaptively produce an optimal tracking manner. Specifically, we present Location and Velocity Quality Learning, which efficiently guides the network to estimate the quality of the predicted object attributes. Based on these quality estimations, we propose a quality-aware object association (QOA) strategy that leverages the quality score as an important reference factor for achieving robust association. Despite its simplicity, extensive experiments indicate that the proposed strategy significantly boosts tracking performance by 2.2% AMOTA and that our method outperforms all existing state-of-the-art works on nuScenes by a large margin. Moreover, QTrack achieves 48.0% and 51.1% AMOTA tracking performance on the nuScenes validation and test sets, which significantly reduces the performance gap between pure camera and LiDAR-based trackers. 1 INTRODUCTION 3D Multi-Object Tracking (MOT) has recently been drawing increasing attention since it is widely applied in 3D perception scenarios, e.g., autonomous driving and robotics. The 3D MOT task aims at locating objects and associating targets of the same identity into tracklets. According to the sensors used, existing 3D MOT methods can mainly be categorized into two classes, i.e., camera-based and LiDAR-based schemes. In this paper, we mainly delve into the camera-only scheme since it provides rich semantic information and is more economical. Existing 3D MOT methods mostly adopt the tracking-by-detection paradigm. In this regime, a 3D detector is first employed to predict 3D boxes and the corresponding classification scores, and then some post-processing methods (e.g., motion-based Kalman (1960) or appearance-based) are used to link detected targets into trajectories. In the camera scheme, it is natural to extract objects' discriminative appearance features Chaabane et al. (2021); Hu et al. (2022) to represent targets and use the features to measure the similarities among detected targets. However, the procedure of extracting appearance features is cumbersome since it requires predicting high-dimensional embeddings, which are hard to train jointly due to the optimization conflict between the detection and embedding branches Yu et al. (2022b). Moreover, it is difficult to deal with the notorious occlusion and motion blur issues. Some other methods Weng et al. (2020a); Pang et al. (2021) build a motion model (Kalman filter) to obtain desired states of tracking clues (e.g., center position, size, size ratio, or rotation) under a linear motion assumption. Nevertheless, this process involves various hyper-parameters (e.g., initialization uncertainty of the measurement, state, and process) and requires complex matrix transposition operations.
Different from the aforementioned methods, CenterPoint Yin et al. (2021) leverages the predicted center locations and velocities of targets to build motion. In detail, it multiplies the predicted velocity by the time lag between two observations to obtain a linear location prediction. Afterwards, the L2 distance among targets acts as the measurement metric for the association procedure. For simplicity, we call this tracking framework the CV method. It proves effective, achieving remarkable tracking performance while requiring only simple operations (i.e., matrix addition and multiplication) for parallel cost computation. Although the CV framework is efficient for 3D MOT tasks, it relies heavily on the predicted quality of the center location and velocity. This requirement may be harsh for the base 3D detector, since estimating the center location and velocity of an object from a single image is an ill-posed problem. As shown in Fig. 1, notorious occlusion, motion blur, and external illumination issues significantly disturb the estimation performance. To further confirm this issue, we conduct an empirical analysis to study the predicted center location and velocity quality distributions as well as their correlations. Our study reveals two valuable points: (1) there exists a significant gap between the estimation error of 3D centers and that of velocities; (2) the predicted quality of location and velocity is extremely misaligned. The imbalanced tracking cues have little effect on detection performance but play a dramatic role in MOT. These observations motivate us to endow each predicted box with the ability to self-diagnose its tracking clues in order to realize stable tracking association. To this end, we propose to forecast the quality of tracking clues from the base 3D detector. Specifically, we introduce a Normalized Gaussian Quality (NGQ) metric with two dimensions to measure the quality of the predicted center location and velocity. The NGQ metric comprehensively considers the vector errors of the two predictions in a 2D vector space, which is a prerequisite for our tracking framework. Based on the quality estimation of NGQ, we design a robust association mechanism, i.e., the Quality-aware Object Association (QOA) strategy. It adopts the velocity quality to filter out low-quality motion candidates, and leverages the location quality to further rule out box centers with bad estimations. Therefore, QOA not only effectively deals with hard cases but also avoids dangerous associations. In a sense, our method follows the "put quality before quantity" principle. By combining the proposed methods with the baseline 3D detector, we obtain a simple and robust 3D MOT framework, namely the quality-aware 3D tracker (QTrack). We conduct extensive experiments on the nuScenes dataset Caesar et al. (2020), showing significant improvements in the 3D MOT task. The contributions of this work are summarized as follows: • We conduct extensive empirical analysis to show that the predicted qualities of the center location and velocity exhibit a large distribution gap and are misaligned, which causes the efficient CV tracking framework to fall into sub-optimal performance. • We propose to predict the quality of velocity and location, measured by our designed NGQ metric. We further introduce QOA to leverage the two qualities to ensure safe association in the 3D MOT task.
• The overall 3D MOT framework (QTrack) achieves SOTA performance on the nuScenes dataset, outperforming other camera-based methods by a large margin. Specifically, QOA improves the baseline tracker by +2.2% AMOTA across several 3D detector settings, showing its effectiveness. 2 RELATED WORK 2.1 3D MULTI-OBJECT TRACKING Thanks to the development of 3D detection Huang et al. (2021); Li et al. (2022a); Liu et al. (2022) and 2D MOT technologies Han et al. (2022); Yu et al. (2022b;a); Zhang et al. (2022b), recent 3D MOT methods Weng et al. (2020a); Yin et al. (2021); Chaabane et al. (2021); Hu et al. (2022); Pang et al. (2021) mainly follow the tracking-by-detection paradigm. Trackers following this paradigm first utilize a 3D object detector to localize the targets in 3D space (including location, rotation, and velocity) and then associate the detected objects with the trajectories according to various cues (location or appearance). Traditional 3D MOT usually uses a motion model (Kalman filter) to predict the locations of the tracklets and then associates the candidate detections using 3D (G)IoU Weng et al. (2020a); Pang et al. (2021) or L2 distance Yin et al. (2021). Some works also utilize advanced appearance models (ReID) Chaabane et al. (2021); Weng et al. (2020b); Chaabane et al. (2021) or temporal models (LSTM) Marinello et al. (2022); Hu et al. (2022) to provide more reference cues for the association. Recently, the Transformer Vaswani et al. (2017) has been used in 3D detection Wang et al. (2022) and MOT Li & Jin (2022); Zhang et al. (2022a) to learn 3D deep representations with 2D visual information and trajectories encoded. Although these methods achieve remarkable performance, when they are applied to complex scenarios (e.g., occlusion, motion blur, or weak lighting), the tracking performance becomes unsatisfactory. In this work, we argue that a simple velocity clue with quality estimation can deal with the corner cases and achieve robust tracking performance. Our proposed QTrack focuses on how to assess the quality of the location and velocity predictions, and then makes full use of these quality scores in the matching process. 2.2 PREDICTION QUALITY ESTIMATION Estimating the quality of a model's predictions is non-trivial and can be applied to tackle prediction imbalance or to support decision-making. In the field of object detection, advanced works Wang et al. (2021); Tian et al. (2019); Jiang et al. (2018) introduce predicting a box's centerness or IoU to perceive the quality of predicted (3D) boxes. Huang et al. (2019) employ this idea to perceive the predicted mask quality. These methods can alleviate the imbalance between the classification score and the location accuracy. Li et al. (2022c) introduce an uncertainty-based method to estimate the predicted quality of several depth factors, and the quality is then employed to make optimal decisions. In this paper, we introduce predicting the quality of velocity and location. The predicted quality is then used to eliminate non-robust association cases in the tracking task. To our knowledge, our work is the first effort to perceive the velocity and location qualities for decision-making in the 3D MOT task. 2.3 MULTI-VIEW 3D OBJECT DETECTION 3D object detection is the predecessor task of 3D MOT. Existing methods can be split into two streams: point-based Lang et al. (2019); Yan et al. (2018); Yin et al. (2021); Shi et al. (2019; 2020); Yang et al. (2022c) and camera-based detectors Wang et al. (2021); Huang et al.
(2021); Li et al. (2022a); Wang et al. (2022); Liu et al. (2022); Li et al. (2022b). In this paper, we focus on 3D MOT for the multi-view camera-based framework, which has made tremendous advances. Transformer-based methods Wang et al. (2022); Liu et al. (2022); Li et al. (2022b) introduce 3D object queries that interact with the multi-view image feature maps. The 3D object queries are constantly refined to predict 3D boxes and other tasks in an end-to-end manner. BEVDet Huang et al. (2021) and BEVDepth Li et al. (2022a) directly project the multi-view image features into a BEV representation and attach a center-based head Yin et al. (2021) to conduct the detection task. Standing on the shoulders of giants, we aim to equip BEVDepth with the ability to perceive the quality of velocity and center location, which is the key to diagnosing non-robust associations for tracking. We then introduce a novel "tracking-by-detection" framework (QTrack) to endow BEVDepth with effective and efficient tracking. 3 METHODOLOGY 3.1 DELVE INTO THE QUALITY DISTRIBUTION We aim to solve the task of 3D multi-object tracking (3D MOT), the goal of which is to locate the objects in 3D space and then associate the detected targets with the same identity into tracklets. The key challenge is how to associate the tracklets efficiently and correctly. In contrast to motion-based and appearance-based association strategies, we argue that the simple velocity clue (CV method) is enough for the association, which is more lightweight and deployment-friendly. However, the performance of the existing CV tracking framework is not satisfactory. To analyze the reason for the limited performance of tracking with velocity, we count and visualize the distributions of the prediction errors of location and velocity. As illustrated in Fig. 2 (a) and (b), we observe that the distributions of the location and velocity quality (prediction error) are scattered, and a sizable number of low-quality boxes are included. Moreover, Fig. 2 (c) shows that the correlation between the location and velocity errors is nonlinear, which means the quality of location and velocity is seriously misaligned. Based on these observations, we conclude that the limited performance of tracking with velocity is due to the following reasons: (1) Low quality of the location or velocity. When one of the location and velocity predictions is not accurate enough, the tracker cannot perform well even if the other prediction is reliable. (2) Misalignment between the quality of location and velocity. We should take both the location and velocity quality into consideration. Driven by this analysis, we propose Location and Velocity Quality Learning to learn the quality uncertainty of the location and velocity, which assists the tracker in selecting high-quality candidates for the association. 3.2 BASE 3D OBJECT DETECTOR Our method can be easily coupled with most existing 3D object detectors with end-to-end training. In this paper, we take BEVDepth Li et al. (2022a) as an example. BEVDepth is a camera-based Bird's-Eye-View (BEV) 3D object detector that transfers the multi-view image features to the BEV feature through a depth estimation network and then localizes and classifies the objects in the BEV view. It consists of four kinds of modules: an image-view encoder, a view transformer with explicit depth supervision utilizing encoded intrinsic and extrinsic parameters, a BEV encoder, and a task-specific head.
The entire network is optimized with a multi-task loss function: L_{det} = L_{depth} + L_{cls} + L_{reg}, (1) where the depth loss L_{depth}, classification loss L_{cls}, and regression loss L_{reg} remain the same as in the original paper. As illustrated in Fig. 3, the regression branch predicts the heatmap, offsets, height, size, rotation, and velocity. 3.3 LOCATION AND VELOCITY QUALITY LEARNING To effectively estimate the quality of location and velocity, we first need to define the quality measurement metric. Technically, the box's center location is calculated by combining the predicted heatmap and the corresponding offsets, so the location quality can be simplified to the predicted offset quality. Specifically, the offsets and velocity are defined in a 2-dimensional vector space. We introduce a Normalized Gaussian Quality (NGQ) metric to represent their quality. Given a predicted vector P ∈ R^2 and a ground truth vector G ∈ R^2, we formulate the NGQ metric as: NGQ = exp(-\sqrt{(P_x - G_x)^2 + (P_y - G_y)^2} / \gamma), (2) where the subscripts x and y indicate the values in the x and y directions, while γ is a hyper-parameter that controls the value distribution of NGQ. We set γ to 1.0 and 3.0 for location and velocity, respectively. P and G can be instantiated as the predicted offset or velocity. When the prediction equals the ground truth, NGQ = 1, while the larger the prediction error, the closer NGQ is to 0. After defining the quality, we elaborate on how to learn it. As shown in Fig. 3, we attach a 3×3 convolution layer to the offset and velocity branches to predict the location quality NGQ_{loc} ∈ R^1 and the velocity quality NGQ_{vel} ∈ R^1, respectively. The quality supervision is conducted with a binary cross-entropy (BCE) loss: L_{quality} = -\frac{1}{N}\sum_{i=1}^{N}[\hat{NGQ}_i \cdot \log NGQ_i + (1 - \hat{NGQ}_i) \cdot \log(1 - NGQ_i)], (3) where \hat{NGQ} is the ground truth quality calculated by Eq. 2. So far, the total loss for our detector is formulated as: L_{total} = L_{det} + L_{quality}. (4) The overall training procedure is end-to-end, and the quality prediction task does not damage the performance of the base detector. Moreover, the quality estimation is used in our proposed Quality-aware Object Association (QOA) module, which will be discussed in the next section. 3.4 QUALITY-AWARE OBJECT ASSOCIATION After obtaining the quality of the center location and velocity, we have more reference cues to achieve robust and accurate association. To this end, we propose a simple but effective quality-aware object association (QOA) strategy. Specifically, QOA sets up two "gates". The first gate is the classification confidence score (cls score). We first separate the candidate detection boxes into high-score and low-score ones according to their cls scores. The high-score candidates are first associated with the tracklets. Then the unmatched tracklets are associated with the low-score candidates. These low-score candidates are mostly caused by occlusion, motion blur, or weak lighting, and are easily confused with miscellaneous boxes. To deal with this issue, the second gate, the quality uncertainty score, is introduced. After obtaining the second association results between the unmatched tracklets and the low-score candidates, we recheck the matched track-det pairs according to the location and velocity quality scores. Only high-quality matched track-det pairs are kept, and low-quality pairs are regarded as mismatches. The pseudo-code of QOA is shown in Algorithm 1. Benefiting from the quality estimation, QOA does not need a complex motion or appearance model to provide association cues.
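To make the quality learning of Sec. 3.3 concrete, here is a minimal NumPy sketch of the NGQ target in Eq. 2 and its BCE supervision in Eq. 3; the array shapes and helper names (ngq_target, bce_quality_loss) are illustrative assumptions, not the authors' released implementation.

import numpy as np

def ngq_target(pred, gt, gamma):
    # Eq. 2: NGQ = exp(-||P - G||_2 / gamma), computed per box for a 2D
    # vector (offset or velocity); pred and gt have shape (N, 2).
    dist = np.sqrt(((pred - gt) ** 2).sum(axis=1))
    return np.exp(-dist / gamma)

def bce_quality_loss(ngq_pred, ngq_hat, eps=1e-7):
    # Eq. 3: binary cross-entropy between the predicted quality ngq_pred
    # (output of the quality head, in (0, 1)) and the target ngq_hat.
    ngq_pred = np.clip(ngq_pred, eps, 1.0 - eps)
    return -np.mean(ngq_hat * np.log(ngq_pred) + (1.0 - ngq_hat) * np.log(1.0 - ngq_pred))

# Example: velocity quality targets with gamma = 3.0, as used in the paper.
pred_vel = np.array([[1.0, 0.2], [0.0, 0.0]])
gt_vel = np.array([[0.9, 0.1], [2.0, 0.0]])
target = ngq_target(pred_vel, gt_vel, gamma=3.0)
loss = bce_quality_loss(np.array([0.9, 0.3]), target)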
A simple velocity prediction (CV) is enough (line #15). Hence, we use the velocity of the tracklet at frame t-1 to predict the center location at frame t, and then compute the L2 distance between the predictions and the candidate detections (line #17 and line #20) as the similarity. At last, we apply the Hungarian algorithm to this similarity to obtain the association results. Mathematically, c_t = c_{t-1} + v_{t-1}\Delta t, cost = L2(c_t, d_t), match = Hungarian(cost), (5) where c_{t-1} and v_{t-1} represent the center location and velocity of a tracklet at frame t-1, d_t is the candidate detection center location at frame t, and \Delta t is the time lag.
Algorithm 1: Pseudo-code of QOA.
Input: A video sequence V; object detector Det; detection score threshold τ; quality score thresholds µ_v, µ_t
Output: Tracks T of the video
1  Initialization: T ← ∅
2  for frame f_k in V do
   /* boxes & scores */
3    D_k ← Det(f_k)
4    D_high ← ∅
5    D_low ← ∅
   /* first gate */
6    for d in D_k do
7      if d.score > τ then
8        D_high ← D_high ∪ {d}
9      end
10     else
11       D_low ← D_low ∪ {d}
12     end
13   end
   /* predict location */
14   for t in T do
15     t ← CV(t)
16   end
   /* association with high scores */
17   Associate T and D_high using L2 distance
18   D_remain ← remaining object boxes from D_high
19   T_remain ← remaining tracks from T
   /* association with low scores */
20   Associate T_remain and D_low using L2 distance
21   T_sec, D_sec ← matched pairs from T_remain, D_low
22   T_re-remain ← remaining tracks from T_remain
   /* second gate */
23   for t, d in T_sec, D_sec do
24     if t.vscore < µ_v or d.lscore < µ_t then
25       T_re-remain ← T_re-remain ∪ {t}
26     end
27   end
   /* update and initialize */
28   T ← T \ T_re-remain
29   for d in D_remain do
30     T ← T ∪ {d}
31   end
32 end
33 Return: T
4 EXPERIMENTS 4.1 DATASETS AND METRICS Datasets. We mainly evaluate QTrack on the 3D detection and tracking datasets of nuScenes. The nuScenes dataset is a large-scale autonomous driving benchmark that consists of 1000 real-world sequences: 700 sequences for training, 150 for validation, and 150 for testing. Each sequence has roughly 40 keyframes, which are annotated for each sensor (e.g., LiDAR, Radar, and camera) at a sampling rate of 2 FPS. Each frame includes images from six cameras with a full 360-degree field of view. For the detection task, there are 1.4M annotated 3D bounding boxes from 10 categories. For the tracking task, it provides 3D tracking bounding boxes from 7 categories. Metrics. For the 3D detection task, we report the nuScenes Detection Score (NDS) and mean Average Precision (mAP), as well as five True Positive (TP) metrics: mean Average Translation Error (mATE), mean Average Scale Error (mASE), mean Average Orientation Error (mAOE), mean Average Velocity Error (mAVE), and mean Average Attribute Error (mAAE). For the 3D tracking task, we report Average Multi-Object Tracking Accuracy (AMOTA) and Average Multi-Object Tracking Precision (AMOTP). We also report metrics from CLEAR Bernardin et al. (2006) used in the 2D tracking task, e.g., MOTA, MOTP, and IDS. 4.2 IMPLEMENTATION DETAILS Following BEVDepth, we adopt three types of image backbone: ResNet-50 He et al. (2016), ResNet-101, and VoVNet-99 (initialized from DD3D Park et al. (2021)). If not specified, images are resized to 256×704. The data augmentation includes random cropping, random scaling, random flipping, and random rotation. In addition, we also adopt BEV data augmentations, including random scaling, random flipping, and random rotation. We use AdamW as the optimizer with a learning rate of 2 × 10^{-4} and a batch size of 64.
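As a rough illustration of the association step in Eq. 5 and the second gate of Algorithm 1, the following Python sketch uses SciPy's Hungarian solver; the dictionary-based track/detection representation, the distance threshold, and the helper names are assumptions made for illustration, not the authors' released code.

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, dets, dt, max_dist=2.0):
    # Eq. 5: predict each tracklet center with its last velocity (CV step),
    # build an L2 cost matrix against detection centers, then run Hungarian.
    if not tracks or not dets:
        return [], list(range(len(tracks))), list(range(len(dets)))
    pred = np.array([t["center"] + t["velocity"] * dt for t in tracks])
    cent = np.array([d["center"] for d in dets])
    cost = np.linalg.norm(pred[:, None, :] - cent[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]
    um_t = [i for i in range(len(tracks)) if i not in {r for r, _ in matches}]
    um_d = [j for j in range(len(dets)) if j not in {c for _, c in matches}]
    return matches, um_t, um_d

def second_gate(tracks, dets, matches, mu_v=0.5, mu_t=0.5):
    # Second gate of QOA (lines #23-27): keep a low-score match only if both
    # the tracklet's velocity quality and the detection's location quality
    # exceed their thresholds; otherwise the pair is treated as a mismatch.
    return [(r, c) for r, c in matches
            if tracks[r]["vel_quality"] >= mu_v and dets[c]["loc_quality"] >= mu_t]

Here tracks and dets would hold 2D NumPy arrays under "center" and "velocity", plus the per-box quality scores predicted by the quality branch.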
When compared with other methods, QTrack is trained for 24 epochs with ResNet backbones and 20 epochs with VoVNet, using CBGS Zhu et al. (2019). 4.3 COMPARISON WITH PRECEDING SOTAS Test and validation set. We compare the performance of QTrack with preceding SOTA methods on the nuScenes benchmark. The results are reported in Tab. 1. Our QTrack outperforms all current SOTA camera-based trackers by a large margin. On both the validation and test sets, all reported metrics (e.g., AMOTA, AMOTP, RECALL, IDS) achieve the best performance. In particular, QTrack is the first camera-based tracker to reach 0.511 AMOTA, which significantly reduces the performance gap between pure camera and LiDAR-based trackers. Comparison with other post-processing trackers. Tab. 2 shows that QTrack outperforms the naive Kalman-filter-based method and its advanced variant from SimpleTrack Pang et al. (2021) under identical 3D detector and backbone settings. Moreover, our method only needs simple operations (i.e., matrix multiplication and addition) in the tracking procedure, while Kalman-filter-based trackers require relatively complex operations such as matrix transposition, as well as an involved process for tuning hyper-parameters. The overall tracking framework is therefore efficient and does not introduce serious latency, which would be fatal in a real perception scenario Yang et al. (2022a;b). 4.4 ABLATION STUDY In this subsection, we verify the effectiveness of the proposed strategies separately through ablation studies. All experiments are conducted on the nuScenes val set. Analysis of the location and velocity quality for tracking. In this part, we conduct an in-depth analysis of the location and velocity quality scores used in the association process. As mentioned before, the location and velocity quality scores are produced by the quality branch. Both are then used as reference clues to filter low-classification-confidence association results in QOA. We evaluate the performance of using only one of them as the second gate of QOA, and the results are reported in Tab. 3. As shown, using only one of the location and velocity quality scores does not improve the tracking performance, which confirms our analysis that location and velocity quality are not aligned and both should be taken into consideration. Analysis of the components of QTrack. In this part, we verify the effectiveness of the various components of QTrack through an ablation study. As shown in Tab. 4, the first row of the table gives the baseline tracking performance when using BEVDepth detections followed by a simple velocity association step (CV method). We observe that both gates of QOA improve the tracking performance in all settings (ResNet-50, ResNet-101, or VoVNet-99; single-frame or multi-frame), which means that filtering low-quality association results is necessary. Furthermore, the IDS metric increases when only the first gate, based on the classification confidence score, is applied. This phenomenon shows that considering only the confidence score inevitably introduces low-quality bounding boxes, which causes bad association cases. Therefore, the second gate, the quality score, provides a fine-grained reference to achieve a better association trade-off. Influence on the base 3D detector. Tab. 5 shows that adding the quality prediction branch does not affect the performance of the base 3D detector.
This is an extremely important property, since post-processing trackers normally rely on strong detector performance. Going one step further, we report the tracking performance when employing the existing CV and SimpleTrack schemes. It shows that the tracking performance is not affected by our quality branch, which agrees with our design purpose in Sec. 1. We then explore appending an appearance branch for extracting instance-wise appearance embeddings, implemented in the same way as Zhang et al. (2021). The results show that a slight performance degradation (nearly 0.5%) is incurred on the detection task, while the tracking performance is damaged more significantly, by nearly 1.0%. This reflects that our method is more effective and efficient. 4.5 DISCUSSION AND FUTURE WORK Inspired by Jiang et al. (2018); Wu et al. (2020); Yang et al. (2022d), we explore incorporating the velocity quality V with the classification score C into a single score M, which acts as the threshold metric in the NMS procedure. Technically, we formulate M in Eq. 6, in which α is a hyper-parameter controlling the contribution of V: M = V^{1-α} · C^{α}. (6) As shown in Fig. 4, we plot four detection metrics while varying α. As the contribution of V becomes bigger, mAVE drops dramatically. However, this also brings an inevitable performance degradation on the mAP and mATE metrics. NDS, as a comprehensive metric, first improves and then degrades as α increases, which is essentially a trade-off between location error and velocity error. This phenomenon agrees with our viewpoint in Sec. 1, i.e., the qualities of these two prediction tasks are not aligned. Combining the detection and tracking performance with respect to the above imbalance issue exposes a challenge: how can we design a method that simultaneously predicts location (or the 3D box) and velocity well? Addressing this challenge could further boost the performance of the 3D detection task and of downstream tasks such as 3D MOT. 5 CONCLUSION In this paper, we analyze the imbalanced prediction quality distribution of location and velocity. This motivates us to propose a Quality-aware Object Association (QOA) method to alleviate the imbalance issue for 3D multi-object tracking (3D MOT). To this end, we introduce the Normalized Gaussian Quality (NGQ) metric to measure the predicted quality of location and velocity, and design an effective module for quality learning. We further present QTrack, a "tracking-by-detection" framework for 3D MOT in the multi-view camera setting, which incorporates QOA into the tracking procedure. Extensive experiments demonstrate the efficacy and robustness of our method. Finally, we pose a challenge to inspire more research on the imbalance between localization and velocity qualities for both 3D detection and tracking tasks.
1. What is the focus of the paper regarding occlusion and motion blur? 2. How does the proposed method address hard cases in the association stage? 3. What are the strengths and weaknesses of the proposed algorithm, particularly its performance and quality estimation module? 4. Do you have any concerns about the results presented in Table 2? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper To deal with hard cases (e.g. occlusion, motion blur) in the association stage, the authors propose to learn quality scores for the offset and velocity. The ground truth quality score is obtained by calculating a Gaussian-normalized distance between the ground truth vector of the offset (or velocity) and the predicted offset (or velocity). A two-stage association strategy is provided as well. The proposed algorithm is evaluated on the nuScenes dataset and achieves promising performance. Strengths And Weaknesses Strengths: The paper proposes to estimate the quality of the predicted offset and velocity, and the performance of the tracker is state-of-the-art. Weaknesses: The classification score is already used to divide the two stages in the association step; maybe the classification score itself is a good indicator of the prediction quality. The quality estimation module just contains two additional regression heads; without exploiting extra information or a specific experimental study, it is difficult to say that the proposed quality estimation approach is fundamentally better than the classification score itself. The results in Table 2 are not quite convincing. The tracking-by-detection paradigm depends notably on the detection performance; BEVDepth with ResNet-101 is better than with ResNet-50 by a large margin, but the tracking performance does not follow the same trend. Maybe the authors need to tune the algorithms carefully. Clarity, Quality, Novelty And Reproducibility The paper is well written in general.
ICLR
Title Max-MIG: an Information Theoretic Approach for Joint Learning from Crowds Abstract Eliciting labels from crowds is a potential way to obtain large labeled data. Despite a variety of methods developed for learning from crowds, a key challenge remains unsolved: learning from crowds without knowing the information structure among the crowds a priori, when some people of the crowds make highly correlated mistakes and some of them label effortlessly (e.g. randomly). We propose an information theoretic approach, Max-MIG, for joint learning from crowds, with a common assumption: the crowdsourced labels and the data are independent conditioning on the ground truth. Max-MIG simultaneously aggregates the crowdsourced labels and learns an accurate data classifier. Furthermore, we devise an accurate data-crowds forecaster that employs both the data and the crowdsourced labels to forecast the ground truth. To the best of our knowledge, this is the first algorithm that solves the aforementioned challenge of learning from crowds. In addition to the theoretical validation, we also empirically show that our algorithm achieves the new state-of-the-art results in most settings, including the real-world data, and is the first algorithm that is robust to various information structures. 1 INTRODUCTION The lack of large labeled data is a notorious bottleneck of the data-driven machine learning paradigm. Crowdsourcing provides a potential solution to this challenge: eliciting labels from crowds. However, the elicited labels are usually very noisy, especially for difficult tasks (e.g. age estimation, medical image annotation). In the crowdsourcing-learning scenario, two problems arise: (i) how to aggregate and infer the ground truth from the imperfect crowdsourced labels? (ii) how to learn an accurate data classifier with the imperfect crowdsourced labels? One conventional solution to the two problems is aggregating the crowdsourced labels using majority vote and then learning a data classifier with the majority answer. However, this naive method will cause biased results when the task is difficult and the majority of the crowds label randomly or always label a particular class (say class 1) effortlessly. Another typical solution is aggregating the crowdsourced labels in a more clever way, like spectral methods (Dalvi et al., 2013; Zhang et al., 2014), and then learning with the aggregated results. This method avoids the above flaw of the majority vote method, as long as the experts' randomnesses are mutually independent. However, the spectral method requires that the experts' labeling noises are mutually independent, which often does not hold in practice since some experts may make highly correlated mistakes (see Figure 2 for example). Moreover, the above solutions aim to train an accurate data classifier and do not provide a method that can employ both the data and the crowdsourced labels to forecast the ground truth. A common assumption in the learning from crowds literature is that conditioning on the ground truth, the crowdsourced labels and the data are independent, as shown in Figure 1 (a). Under this assumption, the crowdsourced labels correlate with the data due to and only due to the ground truth. Thus, this assumption tells us the ground truth is the "information intersection" between the crowdsourced labels and the data. This "information intersection" assumption does not restrict the information structure among the crowds, i.e.
this assumption still holds even if some people of the crowds make highly correlated mistakes. We present several possible information structures under the "information intersection" assumption in Figure 1 (b). The majority vote will lead to inaccurate results in all cases if the experts have different levels of expertise and will induce extremely biased results in case (2), when a large number of junior experts always label class 1. The approaches that require the experts to make independent mistakes will lead to biased results in case (3), when the experts make highly correlated mistakes. In this paper, we propose an information theoretic approach, Max-MIG, for joint learning from crowds, with a common assumption: the crowdsourced labels and the data are independent conditioning on the ground truth. To the best of our knowledge, this is the first algorithm that is both theoretically and empirically robust to the situation where some experts make highly correlated mistakes and some experts label effortlessly, without knowing the information structure among the experts. Our algorithm simultaneously aggregates the crowdsourced labels and learns an accurate data classifier. In addition, we propose a method to learn an accurate data-crowds forecaster that can employ both the data and the crowdsourced labels. At a high level, our algorithm trains a data classifier and a crowds aggregator simultaneously to maximize their "mutual information". This process will find the "information intersection" between the data and the crowdsourced labels, i.e., the ground truth labels. The data-crowds forecaster can be easily constructed from the trained data classifier and the trained crowds aggregator. This algorithm allows conditional dependency among the experts as long as the intersection assumption holds. We design the crowds aggregator as the "weighted average" of the experts. This simple "weighted average" form allows our algorithm to be both highly efficient in computing and theoretically robust to a large family of information structures (e.g. cases (1), (2), (3) in Figure 1 (b)). Particularly, our algorithm works when there exists a subset of senior experts, whose identities are unknown, such that these senior experts have mutually independent labeling biases and it is sufficient to only use the seniors' information to predict the ground truth label. For other junior experts, they are allowed to have any dependency structure among themselves or between them and the senior experts. 2 RELATED WORK A series of works consider the learning from crowds problem and mix the learning process and the aggregation process together. Raykar et al. (2010) reduce the learning from crowds problem to a maximum likelihood estimation (MLE) problem, and implement an EM algorithm to jointly learn the expertise of different experts and the parameters of a logistic regression classifier. Albarqouni et al. (2016) extend this method to combine with the deep learning model. Khetan et al. (2017) also reduce the learning problem to MLE and assume that the optimal classifier gives the ground truth labels and the experts make independent mistakes conditioning on the ground truth. Unlike our method, these MLE-based algorithms are not robust to correlated mistakes. Recently, Guan et al. (2017) and Rodrigues & Pereira (2017) propose methods that model multiple experts individually and explicitly in a neural network.
However, their works lack theoretical guarantees and are outperformed by our method in the experiments, especially in the naive majority case. Moreover, unlike our method, their methods cannot be used to employ both the data and the crowdsourced labels to forecast the ground truth. Several works focus on modeling the experts. Whitehill et al. (2009) model both expert competence and image difficulty, but do not consider expert bias. Welinder et al. (2010) model each expert as a multidimensional classifier in an abstract feature space and consider both the bias of the expert and the difficulty of the image. Rodrigues et al. (2014) model the crowds by a Gaussian process. Khetan & Oh (2016); Shah et al. (2016) consider the generalized Dawid-Skene model (Dawid & Skene, 1979), which involves the task difficulty. However, these works are still not robust to correlated mistakes. We model the crowds via the original Dawid-Skene model and do not consider the task difficulty, but we believe our Max-MIG framework can be combined with any model of the experts and allows correlated mistakes. Our method differs from the works that focus on inferring ground truth answers from the crowds' reports and then learning the classifier with the inferred ground truth (e.g. (Dawid & Skene, 1979; Zhou et al., 2012; Liu et al., 2012; Karger et al., 2014; Zhang et al., 2014; Dalvi et al., 2013; Ratner et al., 2016)), since our method simultaneously infers the ground truth and learns the classifier. In addition, our method provides a data-crowds forecaster while those works do not. Our method is also closely related to co-training. Blum & Mitchell (1998) first propose the co-training framework: simultaneously training two classifiers to aggregate two views of the data. Our method interprets joint learning from crowds as a co-training style problem. Most traditional co-training methods require weakly good classifier candidates (e.g. better than random guessing). We follow the general information theoretic framework proposed by Kong & Schoenebeck (2018) that does not have this requirement. However, Kong & Schoenebeck (2018) only provide a theoretical framework and assume an extremely high model complexity without considering the over-fitting issue, which is too strong an assumption for practice. Our work applies this framework to the learning from crowds problem and provides a proper design for the model complexity as well as experimental validation. 3 METHOD In this section, we formally define the problem, introduce our method, Max-MIG, and provide a theoretical validation for our method. Notations For every set A, we use ∆_A to denote the set of all possible distributions over A. For every integer M, we use [M] to denote {1, 2, . . . , M}. For every matrix A = (A_{i,j})_{i,j} ∈ R_{+}^{s×t}, we define log A as an s × t matrix whose (i, j)-th entry is log(A_{i,j}). Similarly, for every vector v = (v_i)_i ∈ R_{+}^{s}, we define log v as a vector whose i-th entry is log(v_i). Problem statement There are N datapoints. Each datapoint x ∈ I (e.g. the CT scan of a lung nodule) is labeled by M experts, y^{[M]} := {y^1, y^2, . . . , y^M | y^m ∈ C} (e.g. C = {benign, malignant}, 5 experts' labels: {benign, malignant, benign, benign, benign}). The datapoint x and the crowdsourced labels y^{[M]} are related to a ground truth y ∈ C (e.g. the pathological truth of the lung nodule).
We aim to simultaneously train a data classifier h and a crowds aggregator g such that h : I ↦ ∆_C predicts the ground truth y based on the datapoint x ∈ I, and g : C^M → ∆_C aggregates the M crowdsourced labels y^{[M]} into a prediction of the ground truth y. We also want to learn a data-crowds forecaster ζ : I × C^M ↦ ∆_C that forecasts the ground truth y based on both the datapoint x ∈ I and the crowdsourced labels y^{[M]} ∈ C^M. 3.1 MAX-MIG: AN INFORMATION THEORETIC APPROACH Figure 3 illustrates the overall idea of our method. In the first step, finding the "information intersection", the data classifier h maps each datapoint x_i to a prediction h(x_i) ∈ ∆_C and the crowds aggregator g maps the crowdsourced labels y_i^{[M]} to a prediction g(y_i^{[M]}) ∈ ∆_C by "weighted average". We tune the parameters of h and g simultaneously to maximize their f-mutual information gain. We will show the maximum is the f-mutual information (a natural extension of mutual information, see Appendix C) between the data and the crowdsourced labels. In the second step, aggregating the "information intersection", after we obtain the best h, g, p that maximize MIG^f(h, g, p), we use them to construct a data-crowds forecaster ζ that forecasts the ground truth based on both the datapoint and the crowdsourced labels. To calculate the f-mutual information gain, we reward h and g for the average "agreements" between their outputs for the same task, i.e. h(x_i) and g(y_i^{[M]}) (the black lines in Figure 3), and punish them for the average "agreements" between their outputs for different tasks, i.e. h(x_i) and g(y_j^{[M]}) with i ≠ j (the grey lines). Intuitively, the reward encourages the data classifier to agree with the crowds aggregator, while the punishment prevents them from naively agreeing with each other, that is, both of them mapping everything to (1, 0, . . . , 0). The measurement of "agreement" depends on the selection of f; see the formal definition of MIG^f in Eq. (1). Here we formally introduce the building blocks of our method. Data classifier h The data classifier h is a neural network with parameters Θ. Its input is a datapoint x and its output is a distribution over C. We denote the set of all such data classifiers by H_{NN}. Crowds aggregator g The crowds aggregator g is a "weighted average" function that aggregates crowdsourced labels, with parameters {W^m ∈ R^{|C|×|C|}}_{m=1}^{M} and b. Its input y^{[M]} is the set of crowdsourced labels provided by the M experts for a datapoint, and its output is a distribution over C. Representing each y^m ∈ y^{[M]} as a one-hot vector e(y^m) := (0, . . . , 1, . . . , 0)^⊺ ∈ {0,1}^{|C|}, where only the y^m-th entry of e(y^m) is 1, we define g(y^{[M]}; {W^m}_{m=1}^{M}, b) = softmax(\sum_{m=1}^{M} W^m · e(y^m) + b), where W^m · e(y^m) is equivalent to picking the y^m-th column of the matrix W^m, as shown in Figure 3. We denote the set of all such crowds aggregators by G_{WA}. Data-crowds forecaster ζ Given a data classifier h, a crowds aggregator g, and a distribution p = (p_c)_c ∈ ∆_C over the classes, the data-crowds forecaster ζ, which forecasts the ground truth based on both the datapoint x and the crowdsourced labels y^{[M]}, is constructed as ζ(x, y^{[M]}; h, g, p) = Normalize((h(x)_c · g(y^{[M]})_c / p_c)_c), where Normalize(v) := v / \sum_c v_c. f-mutual information gain MIG^f The f-mutual information gain MIG^f measures the "mutual information" between two hypotheses and was proposed by Kong & Schoenebeck (2018).
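For concreteness, here is a minimal NumPy sketch of the weighted-average crowds aggregator g and the data-crowds forecaster ζ defined above; the variable names and the toy shapes are illustrative assumptions, not the authors' released implementation.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def aggregator(y, W, b):
    # g(y^[M]; {W^m}, b): sum over experts of the y^m-th column of W^m, plus a
    # bias, followed by a softmax. y is a length-M array of label indices,
    # W has shape (M, |C|, |C|) and b has shape (|C|,).
    logits = b + sum(W[m][:, y[m]] for m in range(len(y)))
    return softmax(logits)

def forecaster(h_x, g_y, p):
    # zeta(x, y^[M]): elementwise product of the two posteriors divided by the
    # prior p, renormalized to a distribution over classes.
    v = h_x * g_y / p
    return v / v.sum()

# Toy example with |C| = 2 classes and M = 3 experts.
W = np.zeros((3, 2, 2))
b = np.log(np.array([0.7, 0.3]))
g_y = aggregator(np.array([0, 1, 0]), W, b)
zeta = forecaster(np.array([0.6, 0.4]), g_y, np.array([0.7, 0.3]))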
Given N datapoints x_1, x_2, . . . , x_N ∈ I, where each datapoint x_i is labeled by M crowdsourced labels y_i^1, y_i^2, . . . , y_i^M ∈ C, the f-mutual information gain between h and g, associated with a hyperparameter p = (p_c)_c ∈ ∆_C, is defined as the average "agreements" between h and g for the same task minus the average "agreements" between h and g for different tasks, that is, MIG^f(\{x_i\}, \{y_i^{[M]}\}; h, g, p) = \frac{1}{N}\sum_i \partial f\big(\sum_{c\in C} h(x_i)_c \cdot g(y_i^{[M]})_c / p_c\big) - \frac{1}{N(N-1)}\sum_{i\neq j} f^{\star}\big(\partial f\big(\sum_{c\in C} h(x_i)_c \cdot g(y_j^{[M]})_c / p_c\big)\big), (1) where f is a convex function satisfying f(1) = 0 and f^{\star} is the Fenchel dual of f. We can use Table 1 as a reference for ∂f(·) and f^{\star}(∂f(·)). Since the parameters of h are Θ and the parameters of g are {W^m}_{m=1}^{M} and b, we naturally rewrite MIG^f({x_i}, {y_i^{[M]}}; h, g, p) as MIG^f({x_i}, {y_i^{[M]}}; Θ, {W^m}_{m=1}^{M}, b, p). We seek {Θ, {W^m}_{m=1}^{M}, b, p} that maximizes MIG^f. Later we will show that when the prior of the ground truth is p* (e.g. p* = (0.8, 0.2), i.e. the ground truth is benign with probability 0.8 and malignant with probability 0.2 a priori), the best b and p are log p* and p*, respectively. Thus, we can set b to log p and only tune p. When we have side information about the prior p*, we can fix the parameter p as p* and fix the parameter b as log p*. 3.2 THEORETICAL JUSTIFICATION This section provides a theoretical validation for Max-MIG, i.e., maximizing the f-mutual information gain over H_{NN} and G_{WA} finds the "information intersection" between the data and the crowdsourced labels. In Appendix E, we compare our method with the MLE method (Raykar et al., 2010) theoretically and show that, unlike our method, MLE is not robust to the correlated mistakes case. Recall that we assume that conditioning on the ground truth, the data and the crowdsourced labels are mutually independent. Thus, we can naturally define the "information intersection" as a pair of data classifier and crowds aggregator h*, g* such that they both fully use their input to forecast the ground truth. Kong & Schoenebeck (2018) show that when we have an infinite number of datapoints and maximize over all possible data classifiers and crowds aggregators, the "information intersection" will maximize MIG^f(h, g) to the f-mutual information (Appendix C) between the data and the crowdsourced labels. However, in practice, with a finite number of datapoints, the data classifier and the crowds aggregator spaces should be not only sufficiently rich to contain the "information intersection" but also sufficiently simple to avoid over-fitting. Later, the experiment section will show that our chosen H_{NN} and G_{WA} are sufficiently simple to avoid over-fitting. We assume the neural network space is sufficiently rich. It remains to show that our weighted average aggregator space G_{WA} is sufficiently rich to contain g*. Model and assumptions Each datapoint x_i with crowdsourced labels provided by M experts y_i^1, . . . , y_i^M is drawn i.i.d. from random variables X, Y^1, . . . , Y^M. Assumption 3.1 (Co-training assumption). X and Y^{[M]} are independent conditioning on Y. Note that we do not assume that the experts' labels are conditionally mutually independent. We define p* ∈ ∆_C as the prior for Y, i.e. p*_c = P(Y = c). Definition 3.2 (Information intersection). We define h*, g*, and ζ* such that h*(x)_c = P(Y = c | X = x), g*(y^{[M]})_c = P(Y = c | Y^{[M]} = y^{[M]}), and ζ*(x, y^{[M]})_c = P(Y = c | X = x, Y^{[M]} = y^{[M]}). We call them the Bayesian posterior data classifier / crowds aggregator / data-crowds forecaster, respectively. We call (h*, g*) the information intersection between the data and the crowdsourced labels. We also assume the neural network space is sufficiently rich to contain h*.
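To illustrate Eq. (1), the following NumPy sketch computes the gain for the KL-divergence choice f(t) = t log t, for which ∂f(t) = log t + 1 and f*(∂f(t)) = t; the batch-style computation and variable names are assumptions made for illustration, not the authors' released code.

import numpy as np

def mig_kl(H, G, p):
    # H: (N, |C|) outputs of the data classifier, G: (N, |C|) outputs of the
    # crowds aggregator, p: (|C|,) prior. S[i, j] = sum_c H[i, c] * G[j, c] / p[c].
    S = (H / p) @ G.T
    n = S.shape[0]
    # Reward: same-task agreements on the diagonal, using df(t) = log t + 1.
    reward = np.mean(np.log(np.diag(S)) + 1.0)
    # Punishment: f*(df(t)) = t averaged over the off-diagonal (i != j) pairs.
    punishment = S[~np.eye(n, dtype=bool)].mean()
    return reward - punishment

# Toy check: two classes, three datapoints, uniform prior.
H = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
G = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
print(mig_kl(H, G, p=np.array([0.5, 0.5])))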
Assumption 3.3 (Richness of the neural networks). h* ∈ H_{NN}. Theorem 3.4. Under Assumptions 3.1 and 3.3, when there exists a subset of experts S ⊂ [M] such that the experts in S are mutually independent conditioning on Y and Y^S is a sufficient statistic for Y, i.e. P(Y = y | Y^{[M]} = y^{[M]}) = P(Y = y | Y^S = y^S) for every y ∈ C, y^{[M]} ∈ C^M, then (h*, g*, p*) is a maximizer of max_{h ∈ H_{NN}, g ∈ G_{WA}, p ∈ ∆_C} E_{X, Y^{[M]}} MIG^f(h(X), g(Y^{[M]}), p), and the maximum is the f-mutual information between X and Y^{[M]}. Moreover, ζ*(x, y^{[M]}) = ζ(x, y^{[M]}; h*, g*, p*) for every x, y^{[M]}. Our main theorem shows that if there exists a subset of senior experts such that these senior experts are mutually conditionally independent and it is sufficient to only use the information from these senior experts, then Max-MIG finds the "information intersection". Note that we do not need to know the identities of the senior experts. For other junior experts, we allow any dependency structure among them and between them and the senior experts. Moreover, this theorem also shows that our method handles the independent mistakes case, where all experts can be seen as senior experts (Proposition D.3). To show our results, we need to show that G_{WA} contains g*, i.e. there exist proper weights such that g* can be represented as a weighted average. In the independent mistakes case, we can construct each expert's weight using her confusion matrix. Thus, in this case, each expert's weight represents her expertise. In the general case, we can construct each senior expert's weight using her confusion matrix and set the junior experts' weights to zero. Due to space limitations, we defer the formal proofs to Appendix D. 4 EXPERIMENT In this section, we evaluate our method on image classification tasks with both synthesized crowdsourced labels in various settings and real-world data. Our method Max-MIG is compared with: Majority Vote, training the network with the majority vote labels from all the experts; Crowd Layer, the method proposed by Rodrigues & Pereira (2017); Doctor Net, the method proposed by Guan et al. (2017); and AggNet, the method proposed by Albarqouni et al. (2016). Image datasets Three datasets are used in our experiments. The Dogs vs. Cats (Kaggle, 2013) dataset consists of 25,000 images from 2 classes, dogs and cats, which is split into a 12,500-image training set and a 12,500-image test set. The CIFAR-10 (Krizhevsky et al., 2014) dataset consists of 60,000 32 × 32 color images from 10 classes, which is split into a 50,000-image training set and a 10,000-image test set. The LUNA16 (Setio et al., 2016) dataset consists of 888 CT scans of lung nodules. We preprocessed the CT scans by generating 8106 50 × 50 gray-scale images, split into a 6484-image training set and a 1622-image test set. LUNA16 is a highly imbalanced dataset (85%, 15%). Synthesized crowdsourced labels in various settings For each information structure in Figure 1, we generate two groups of crowdsourced labels for each dataset: labels provided by (H) experts with relatively high expertise; (L) experts with relatively low expertise. For each of the situations (H) and (L), all three cases have the same senior experts. Case 4.1. (Independent mistakes) M_s senior experts are mutually conditionally independent. Case 4.2. (Naive majority) M_s senior experts are mutually conditionally independent, while the other M_j junior experts label all datapoints as the first class effortlessly. Case 4.3.
(Correlated mistakes) M_s senior experts are mutually conditionally independent, and each junior expert copies one of the senior experts. Real-world dataset The LabelMe data (Rodrigues & Pereira, 2017; Russell et al., 2008) consists of a total of 2688 images, where 1000 of them were used to obtain labels from multiple annotators on Amazon Mechanical Turk and the remaining 1688 images were used for evaluating the different approaches. Each image was labeled by an average of 2.547 workers, with a mean accuracy of 69.2%. Networks We follow the four-layer network in Rodrigues & Pereira (2017) on Dogs vs. Cats and LUNA16 and use VGG-16 on CIFAR-10 as the backbone of the data classifier h. For the LabelMe data, we apply the same setting as Rodrigues & Pereira (2017): we use a pre-trained VGG-16 deep neural network and apply only one FC layer (with 128 units and ReLU activations) and one output layer on top, using 50% dropout. We defer other implementation details to Appendix B.
Table 2: Accuracy on LabelMe (real-world crowdsourced labels)
Method          Accuracy
Majority Vote   80.41 ± 0.56
Crowd Layer     83.65 ± 0.50
Doctor Net      80.56 ± 0.59
AggNet          85.20 ± 0.26
Max-MIG         86.42 ± 0.36
Figure 4: Results on Dogs vs. Cats, CIFAR-10, LUNA16.
4.1 RESULTS We train the data classifier h on the four datasets through our method and the other related methods. (The results of Max-MIG are based on the KL divergence; the results for other divergences are similar and still match our theory.) The accuracy of the trained data classifiers on the test set is shown in Table 2 and Figure 4. We also show the accuracy of our data-crowds forecaster on the test set and compare it with AggNet (Table 3). For the performance of the trained data classifiers, our method Max-MIG (red) outperforms all other methods in almost every experiment. For the real-world dataset, LabelMe, we achieve new state-of-the-art results. For the synthesized crowdsourced labels, the majority vote method (grey) fails in the naive majority situation. AggNet has reasonably good performance when the experts are conditionally independent, including the naive majority case since a naive expert is independent of everything, while it is outperformed by our method by a large margin in the correlated mistakes case. This matches the theory in Appendix E: AggNet is based on MLE and MLE fails in the correlated mistakes case. The Doctor Net (green) and the Crowd Layer (blue) methods are not robust to the naive majority case. Our data-crowds forecaster (Table 3) performs better than our data classifier, which shows that our data-crowds forecaster actually takes advantage of the additional information, the crowdsourced labels, to give a better result. Like us, AggNet also jointly trains the classifier and the aggregator, and can be used to train a data-crowds forecaster. We compared our data-crowds forecaster with AggNet. When there are no correlated mistakes, we outperform AggNet or have very similar performance; when there are correlated mistakes, we outperform AggNet by a large margin (e.g. +30%). Recall that in the experiments, for each of the situations (H) and (L), all three cases have the same senior experts. Thus, all three cases' crowdsourced labels contain the same amount of information. The results show that Max-MIG has similar performance across all three cases for each of the situations (H) and (L), which validates our theoretical result: Max-MIG finds the "information intersection" between the data and the crowdsourced labels.
5 CONCLUSION AND DISCUSSION

We propose an information theoretic approach, Max-MIG, for joint learning from crowds, under a common assumption: the crowdsourced labels and the data are independent conditioning on the ground truth. We provide theoretical validation for our approach and compare it experimentally with previous methods (Doctor Net (Guan et al., 2017), Crowd Layer (Rodrigues & Pereira, 2017), AggNet (Albarqouni et al., 2016)) under several different information structures. Each of the previous methods is not robust to at least one information structure, whereas our method is robust to all of them and outperforms all other methods in almost every experiment. To the best of our knowledge, our approach is the first algorithm that is both theoretically and empirically robust to the situation where some people make highly correlated mistakes and some people label effortlessly, without knowing the information structure among the crowds. We also test our method on real-world data and achieve a new state-of-the-art result.

Our current implementation of Max-MIG has several limitations. For example, we implement the aggregator using a simple linear model, which cannot handle the case when the senior experts are latent and cannot be linearly inferred from the junior experts. However, note that if the aggregator space is sufficiently rich, the Max-MIG approach is still able to handle any situation as long as the "information intersection" assumption holds. One potential future direction is designing more complex but still trainable aggregator spaces.

ACKNOWLEDGMENTS

We would like to express our thanks for support from the following research grants: NSFC-61625201 and 61527804.

A DATA-CROWDS FORECASTER COMPARISON

Here (dc) is shorthand for the data-crowds forecaster and (d) is shorthand for the data classifier. We report the average over five runs; the variance is small. Due to space limitations, we omit the variances here.

B EXPERIMENTS DETAILS

B.1 EXPERTS' EXPERTISE

For each information structure in Figure 1, we generate two groups of crowdsourced labels for each dataset: labels provided by (H) experts with relatively high expertise; (L) experts with relatively low expertise. For each of the situations (H) and (L), all three cases have the same senior experts.

Case B.1. (Independent mistakes) M_s senior experts are mutually conditionally independent. (H) M_s = 5. (L) M_s = 10.

Dogs vs. Cats In situation (H), some senior experts are more familiar with cats, while others make better judgments on dogs. For example, expert A is more familiar with cats: her expertise for dogs/cats is 0.6/0.8, in the sense that if the ground truth is dog/cat, she labels the image as "dog"/"cat" with probability 0.6/0.8 respectively. Similarly, the other experts' expertise is B: 0.6/0.6, C: 0.9/0.6, D: 0.7/0.7, E: 0.6/0.7. In situation (L), all ten seniors' expertise is 0.55/0.55.
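To illustrate how such per-class expertise turns into simulated crowdsourced labels, here is a minimal NumPy sketch; the simulate_expert helper is an assumed reading of the setup above, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_expert(y_true, accuracy_per_class):
    """Simulate one expert on binary ground-truth labels y_true (0 = dog, 1 = cat).
    accuracy_per_class[c] is the probability of labeling class c correctly;
    otherwise the expert reports the other class."""
    correct = rng.random(len(y_true)) < accuracy_per_class[y_true]
    return np.where(correct, y_true, 1 - y_true)

y_true = rng.integers(0, 2, size=10000)
# Situation (H) senior experts from Case B.1 (dogs/cats expertise):
experts_H = {"A": (0.6, 0.8), "B": (0.6, 0.6), "C": (0.9, 0.6), "D": (0.7, 0.7), "E": (0.6, 0.7)}
labels = {name: simulate_expert(y_true, np.array(acc)) for name, acc in experts_H.items()}
print({name: (lab == y_true).mean() for name, lab in labels.items()})  # empirical accuracies
```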
CIFAR-10 In situation (H), we generate experts who may make mistakes in distinguishing the hard pairs cat/dog, deer/horse, airplane/bird, automobile/truck, and frog/ship, but can perfectly distinguish the other, easier pairs (e.g. cat/frog), which is plausible in practice. When they cannot distinguish a pair, some of them label the pair randomly and some of them always label the pair as the same class. In detail, for each hard pair, expert A always labels the pair as the same class (e.g. A always labels the image as "cat" when the image shows a cat or a dog), while expert B labels the pair uniformly at random (e.g. B labels the image as "cat" with probability 0.5 and "dog" with probability 0.5 when the image shows a cat or a dog). Expert C is familiar with mammals, so she can distinguish cat/dog and deer/horse, while for the other hard pairs she labels each of them uniformly at random. Expert D is familiar with vehicles, so she can distinguish airplane/bird, automobile/truck and frog/ship, while for the other hard pairs she always labels each of them as the same class. Expert E does not have special expertise; for each hard pair, she labels the images correctly with probability 0.6. In situation (L), all ten senior experts label each image correctly with probability 0.2 and label it as each of the other (false) classes uniformly with probability 0.8/9.

LUNA16 In situation (H), some senior experts tend to label the image as "benign" while others tend to label the image as "malignant". Their expertise for benign/malignant is: A: 0.6/0.9, B: 0.7/0.7, C: 0.9/0.6, D: 0.6/0.7, E: 0.7/0.6. In situation (L), all ten seniors' expertise is 0.6/0.6.

Case B.2. (Naive majority) M_s senior experts are mutually conditionally independent, while the other M_j junior experts label all data as the first class effortlessly. (H) M_s = 5, M_j = 5. (L) M_s = 10, M_j = 15. For Dogs vs. Cats, all junior experts label everything as "cat". For CIFAR-10, all junior experts label everything as "airplane". For LUNA16, all junior experts label everything as "benign".

Case B.3. (Correlated mistakes) M_s senior experts are mutually conditionally independent, and each junior expert copies one of the senior experts. (H) M_s = 5, M_j = 5. (L) M_s = 10, M_j = 2. For Dogs vs. Cats, CIFAR-10 and LUNA16, in situation (H), two junior experts copy expert A's labels and three junior experts copy expert C's labels; in situation (L), one junior expert copies expert A's labels and another copies expert C's labels.

B.2 IMPLEMENTATION DETAILS

Networks For Dogs vs. Cats and LUNA16, we follow the four-layer network in Rodrigues & Pereira (2017). We use the Adam optimizer with learning rate 1.0 × 10^-4 for both the data classifier and the crowds aggregator; the batch size is 16. For CIFAR-10, we use VGG-16 as the backbone, with the Adam optimizer at learning rate 1.0 × 10^-3 for the data classifier and 1.0 × 10^-4 for the crowds aggregator; the batch size is 64. For the LabelMe data, we apply the same setting as Rodrigues & Pereira (2017): a pre-trained VGG-16 deep neural network with only one FC layer (with 128 units and ReLU activations) and one output layer on top, using 50% dropout. We use the Adam optimizer with learning rate 1.0 × 10^-4 for both the data classifier and the crowds aggregator. For Max-MIG's crowds aggregator, on Dogs vs. Cats and LUNA16 we set the bias b to log p and tune only p; on CIFAR-10 and the LabelMe data we fix the prior distribution p to the uniform distribution p_0 and fix the bias b to log p_0.

Initialization For AggNet and our method Max-MIG, we initialize the parameters {W^m}_m using the method of Raykar et al. (2010):

W^m_{c,c'} = log [ Σ_{i=1}^N Q(y_i = c) 1(y_i^m = c') / Σ_{i=1}^N Q(y_i = c) ]        (2)

where 1(y_i^m = c') = 1 when y_i^m = c' and 0 otherwise, and N is the total number of datapoints. We average all crowdsourced labels to obtain Q(y_i = c) := (1/M) Σ_{m=1}^M 1(y_i^m = c). For the Crowd Layer method, we initialize the weight matrices with identity matrices on Dogs vs. Cats and LUNA16, as Rodrigues & Pereira (2017) suggest. However, this initialization leads to poor results on CIFAR-10, so we use (2) for Crowd Layer on CIFAR-10 as well, which gave the best results in our experiments.
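A minimal NumPy sketch of this initialization (equation (2)), assuming the crowdsourced labels are stored as an N × M array of class indices; the function and variable names are illustrative.

```python
import numpy as np

def init_weights(crowd_labels, n_classes):
    """Initialize {W^m} as in equation (2): W^m_{c,c'} is the log of the soft confusion matrix
    estimated from the averaged crowdsourced labels Q.
    crowd_labels: (N, M) array of class indices; returns a list of M (|C| x |C|) matrices."""
    N, M = crowd_labels.shape
    one_hot = np.eye(n_classes)[crowd_labels]        # (N, M, |C|)
    Q = one_hot.mean(axis=1)                          # Q(y_i = c), shape (N, |C|)
    W = []
    for m in range(M):
        counts = Q.T @ one_hot[:, m, :]               # sum_i Q(y_i=c) 1(y_i^m=c'), shape (|C|, |C|)
        # (in practice one may add a small epsilon before the log to avoid log(0))
        W.append(np.log(counts / Q.sum(axis=0, keepdims=True).T))
    return W

# Illustrative usage with random labels:
rng = np.random.default_rng(0)
W_init = init_weights(rng.integers(0, 2, size=(100, 3)), n_classes=2)
print(W_init[0])
```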
C f-MUTUAL INFORMATION

C.1 f-DIVERGENCE AND FENCHEL'S DUALITY

f-divergence (Ali & Silvey, 1966; Csiszár et al., 2004) The f-divergence D_f : ∆_Σ × ∆_Σ ↦ R is a non-symmetric measure of the difference between a distribution p ∈ ∆_Σ and a distribution q ∈ ∆_Σ, defined as

D_f(p, q) = Σ_{σ∈Σ} p(σ) f(q(σ)/p(σ))

where f : R ↦ R is a convex function with f(1) = 0.

C.2 f-MUTUAL INFORMATION

Given two random variables X, Y whose realization spaces are Σ_X and Σ_Y, let U_{X,Y} and V_{X,Y} be two probability measures, where U_{X,Y} is the joint distribution of (X, Y) and V_{X,Y} is the product of the marginal distributions of X and Y. Formally, for every pair (x, y) ∈ Σ_X × Σ_Y,

U_{X,Y}(X = x, Y = y) = Pr[X = x, Y = y],   V_{X,Y}(X = x, Y = y) = Pr[X = x] Pr[Y = y].

If U_{X,Y} is very different from V_{X,Y}, the mutual information between X and Y should be high, since knowing X changes the belief about Y a lot. If U_{X,Y} equals V_{X,Y}, the mutual information between X and Y should be zero, since X is independent of Y. Intuitively, the "distance" between U_{X,Y} and V_{X,Y} represents the mutual information between X and Y.

Definition C.1 (f-mutual information (Kong & Schoenebeck, 2016)). The f-mutual information between X and Y is defined as

MI_f(X, Y) = D_f(U_{X,Y}, V_{X,Y})

where D_f is the f-divergence. f-mutual information is always non-negative.

Kong & Schoenebeck (2016) show that if we measure the amount of information by f-mutual information, any "data processing" on either of the random variables decreases the amount of information crossing them. With this property, Kong & Schoenebeck (2016) propose an information theoretic mechanism design framework using f-mutual information. Kong & Schoenebeck (2018) reduce the co-training problem to a mechanism design problem and extend the information theoretic framework of Kong & Schoenebeck (2016) to address the co-training problem.
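For concreteness, here is a minimal NumPy sketch of Definition C.1 for finite random variables, using f(t) = t log t as one admissible convex f with f(1) = 0; the joint distribution is made up for illustration.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(p, q) = sum_sigma p(sigma) f(q(sigma) / p(sigma)) for discrete distributions.
    Assumes the support of q is contained in the support of p."""
    p, q = np.asarray(p, float).ravel(), np.asarray(q, float).ravel()
    mask = p > 0
    return np.sum(p[mask] * f(q[mask] / p[mask]))

def f_mutual_information(joint, f):
    """MI_f(X, Y) = D_f(U_{X,Y}, V_{X,Y}); `joint` is the |X| x |Y| joint probability table."""
    joint = np.asarray(joint, float)
    marg_product = np.outer(joint.sum(axis=1), joint.sum(axis=0))   # product of the marginals
    return f_divergence(joint, marg_product, f)

f_kl = lambda t: t * np.log(t)          # a convex f with f(1) = 0
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])          # illustrative joint distribution of (X, Y)
print(f_mutual_information(joint, f_kl))   # non-negative; 0 iff X and Y are independent
```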
D PROOF OF THEOREM 3.4

This section provides the formal proofs of our main theorem.

Definition D.1 (Confusion matrix). For each expert m, we define her confusion matrix as C^m = (C^m_{c,c'})_{c,c'} ∈ R^{|C|×|C|} where C^m_{c,c'} = P(Y^m = c' | Y = c).

We denote the set of all possible classifiers by H_∞ and the set of all possible aggregators by G_∞.

Lemma D.2 (Kong & Schoenebeck, 2018). With Assumptions 3.1 and 3.3, (h*, g*, p*) is a maximizer of

max_{h ∈ H_∞, g ∈ G_∞, p ∈ ∆_C} E_{X, Y^[M]} MIG_f(h(X), g(Y^[M]), p)

and the maximum is the f-mutual information between X and Y^[M], MI_f(X, Y^[M]). Moreover, ζ*(x, y^[M]) = ζ(x, y^[M]; h*, g*, p*) for every x, y^[M].

Proposition D.3 (Independent mistakes). With Assumptions 3.1 and 3.3, if the experts are mutually independent conditioning on Y, then g* ∈ G_WA and g*(y^[M]) = g(y^[M]; {log C^m}_{m=1}^M, log p*) for every y^[M] ∈ C^M. This implies that (h*, g*, p*) is a maximizer of

max_{h ∈ H_NN, g ∈ G_WA, p ∈ ∆_C} E_{X, Y^[M]} MIG_f(h(X), g(Y^[M]), p)

and the maximum is the f-mutual information between X and Y^[M], MI_f(X, Y^[M]). Moreover, ζ*(x, y^[M]) = ζ(x, y^[M]; h*, g*, p*) for every x, y^[M].

Proof. We will show that when the experts are mutually conditionally independent, g*(y^[M]) = g(y^[M]; {log C^m}_{m=1}^M, log p*). This also implies that g* ∈ G_WA. Combined with Lemma D.2 and the assumption h* ∈ H_NN, it follows that (h*, g*, p*) is a maximizer of max_{h ∈ H_NN, g ∈ G_WA, p ∈ ∆_C} MIG_f(h, g, p) and that the maximum is the f-mutual information between X and Y^[M]. Moreover, Lemma D.2 also implies that ζ*(x, y^[M]) = ζ(x, y^[M]; h*, g*, p*) for every x, y^[M].

For every c ∈ C and every y^[M] ∈ C^M,

(log g*(y^[M]))_c = log P(Y = c | Y^[M] = y^[M])
                  = log [P(Y^[M] = y^[M] | Y = c) P(Y = c)] - log P(Y^[M] = y^[M])
                  = Σ_{m=1}^M log P(Y^m = y^m | Y = c) + log P(Y = c) - log P(Y^[M] = y^[M]).

Thus,

(Σ_{m=1}^M log C^m · e(y^m) + log p*)_c = Σ_{m=1}^M log P(Y^m = y^m | Y = c) + log P(Y = c)
                                        = (log g*(y^[M]))_c + log P(Y^[M] = y^[M]).

Then,

(softmax(Σ_m log C^m · e(y^m) + log p*))_c
  = exp[(log g*(y^[M]))_c + log P(Y^[M] = y^[M])] / Σ_c exp[(log g*(y^[M]))_c + log P(Y^[M] = y^[M])]
  = exp[(log g*(y^[M]))_c] / Σ_c exp[(log g*(y^[M]))_c]
  = (g*(y^[M]))_c        (since g*(y^[M]) ∈ ∆_C, Σ_c g*(y^[M])_c = 1).

Thus, g*(y^[M]) = softmax(Σ_m log C^m · e(y^m) + log p*) = g(y^[M]; {log C^m}_{m=1}^M, log p*).

We restate our main theorem, Theorem 3.4, here with more details and prove it.

Theorem 3.4 (General case). With Assumptions 3.1 and 3.3, when there exists a subset of experts S ⊂ [M] such that the experts in S are mutually independent conditioning on Y and Y^S is a sufficient statistic for Y, i.e. P(Y = y | Y^[M] = y^[M]) = P(Y = y | Y^S = y^S) for every y ∈ C, y^[M] ∈ C^M, then g* ∈ G_WA and g*(y^[M]) = g(y^[M]; {W*^m}_m, log p*) for every y^[M] ∈ C^M, where W*^m = log C^m for every m ∈ S and W*^m = 0 (the all-zero matrix) for every m ∉ S. This implies that (h*, g*, p*) is a maximizer of

max_{h ∈ H_NN, g ∈ G_WA, p ∈ ∆_C} E_{X, Y^[M]} MIG_f(h(X), g(Y^[M]), p)

and the maximum is the f-mutual information between X and Y^[M], MI_f(X, Y^[M]). Moreover, ζ*(x, y^[M]) = ζ(x, y^[M]; h*, g*, p*) for every x, y^[M].

Proof. As in the proof of the above proposition, we only need to show that g*(y^[M]) = g(y^[M]; {W*^m}_m, log p*); this also implies that g* ∈ G_WA as well as the other results of the theorem. When Y^S is a sufficient statistic for Y, we have g*(y^[M]) = g*(y^S). Proposition D.3 shows that g*(y^S) = g(y^S; {log C^s}_{s∈S}, log p*). Thus,

g*(y^[M]) = g*(y^S) = g(y^S; {log C^s}_{s∈S}, log p*) = g(y^[M]; {W*^m}_m, log p*)

where W*^m = log C^m for every m ∈ S and W*^m = 0 for every m ∉ S.
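A small numerical check of this construction (a sketch under the assumptions of Theorem 3.4, not code from the paper): two conditionally independent senior experts plus one junior expert that copies senior 1. With the theorem's weights (log confusion matrices for the seniors, the all-zero matrix for the junior), the weighted-average aggregator should coincide with the exact Bayesian posterior.

```python
import numpy as np
from itertools import product

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

p_star = np.array([0.6, 0.4])                     # prior over C = {0, 1}
C1 = np.array([[0.8, 0.2], [0.3, 0.7]])           # senior 1, C^m_{c,c'} = P(Y^m = c' | Y = c)
C2 = np.array([[0.7, 0.3], [0.1, 0.9]])           # senior 2
# Junior (expert 3) copies senior 1, so Y^3 = Y^1.

for y1, y2 in product([0, 1], repeat=2):
    y3 = y1                                       # the copy
    # Exact posterior P(Y = c | Y^[3]): the junior adds no information beyond Y^1.
    joint = p_star * C1[:, y1] * C2[:, y2]
    posterior = joint / joint.sum()
    # Theorem 3.4 weights: W^1 = log C1, W^2 = log C2, W^3 = 0.
    score = np.log(p_star) + np.log(C1)[:, y1] + np.log(C2)[:, y2] + np.zeros((2, 2))[:, y3]
    assert np.allclose(posterior, softmax(score))
print("weighted-average aggregator matches the Bayesian posterior")
```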
E THEORETICAL COMPARISONS WITH MLE

Raykar et al. (2010) propose a maximum likelihood estimation (MLE) based method in the learning from crowds scenario. Raykar et al. (2010) use logistic regression, and AggNet (Albarqouni et al., 2016) extends it to deep learning models. In this section, we theoretically show that these MLE based methods can handle the independent mistakes case but cannot handle even the simplest correlated mistakes case (only one expert reports meaningful information and all other experts always report the same meaningless information), which can be handled by our method. Therefore, in addition to the experimental results, our method also has a theoretical advantage over these MLE based methods.

We first introduce these MLE based methods. Let Θ be the parameter that controls the distribution over X and Y, and let Θ^m be the parameter that controls the distribution over Y^m and Y. For each x, y^[M],

P(Y^[M] = y^[M] | X = x; Θ, {Θ^m}_m)                                                              (3)
  = Σ_y P(Y = y | X = x; Θ) P(Y^[M] = y^[M] | Y = y; {Θ^m}_m)     (conditioning on Y, X and Y^[M] are independent)
  = Σ_y P(Y = y | X = x; Θ) Π_{m=1}^M P(Y^m = y^m | Y = y; Θ^m)   (experts are mutually conditionally independent).

The MLE based method seeks Θ and {Θ^m}_m that maximize

Σ_{i=1}^N log Σ_c P(Y = c | X = x_i; Θ) Π_{m=1}^M P(Y_i^m = y_i^m | Y = c; Θ^m).

To compare it with our method theoretically, we reinterpret the above MLE based method in our language. We define T as the set of all |C| × |C| transition matrices with each row summing to 1. For each expert m, we define W^m ∈ T as a parameter associated with m. Given a set of data classifiers h ∈ H where h : I ↦ ∆_C, the MLE based method seeks h ∈ H and transition matrices W^1, W^2, ..., W^M ∈ T that maximize

Σ_{i=1}^N log Σ_c h(x_i)_c Π_{m=1}^M W^m_{c, y_i^m}.

The expectation of the above objective is

E_{X, Y^[M]} log Σ_c h(X)_c Π_{m=1}^M W^m_{c, Y^m}.

Note that Raykar et al. (2010) take the data classifier space H to be all logistic regression classifiers, and Albarqouni et al. (2016) extend this space to neural networks.

Proposition E.1 (MLE works for independent mistakes). If the experts are mutually independent conditioning on Y, then h* and C^1, C^2, ..., C^M are a maximizer of

max_{h, W^1, W^2, ..., W^M ∈ T} E_{X, Y^[M]} log Σ_c h(X)_c Π_{m=1}^M W^m_{c, Y^m}.

Proof.

E_{X, Y^[M]} log Σ_c h(X)_c Π_{m=1}^M W^m_{c, Y^m}
  = Σ_{x, y^[M]} P(X = x, Y^[M] = y^[M]) log Σ_c h(x)_c Π_{m=1}^M W^m_{c, y^m}
  = Σ_x P(X = x) Σ_{y^[M]} P(Y^[M] = y^[M] | X = x) log Σ_c h(x)_c Π_{m=1}^M W^m_{c, y^m}.

Since W^1, W^2, ..., W^M ∈ T, we have Σ_{y^[M] ∈ C^M} Σ_{c ∈ C} h(x)_c Π_{m=1}^M W^m_{c, y^m} = 1, which means (Σ_{c ∈ C} h(x)_c Π_{m=1}^M W^m_{c, y^m})_{y^[M]} can be seen as a distribution over all possible y^[M] ∈ C^M. Moreover, for any two distribution vectors p and q, p · log q ≤ p · log p. Thus,

Σ_x P(X = x) Σ_{y^[M]} P(Y^[M] = y^[M] | X = x) log Σ_c h(x)_c Π_{m=1}^M W^m_{c, y^m}
  ≤ Σ_x P(X = x) Σ_{y^[M]} P(Y^[M] = y^[M] | X = x) log P(Y^[M] = y^[M] | X = x)
  = Σ_x P(X = x) Σ_{y^[M]} P(Y^[M] = y^[M] | X = x) log Σ_c h*(x)_c Π_{m=1}^M C^m_{c, y^m}     (see equation (3)).

Thus, the MLE based method handles the independent mistakes case. However, we now construct a counterexample to show that it cannot handle a simple correlated mistakes case which can be handled by our method.

Example E.2 (A simple correlated mistakes case). We assume there are only two classes, C = {0, 1}, and the prior over Y is uniform, that is, P(Y = 0) = P(Y = 1) = 0.5. We also assume that X = Y. There are 101 experts. One of them, say the first expert, fully knows Y and always reports Y^1 = Y. The second expert knows nothing and every time flips an unbiased coin whose randomness is independent of X, Y; she reports Y^2 = 1 when she gets heads and Y^2 = 0 otherwise. The remaining experts copy the second expert's answer all the time, i.e. Y^m = Y^2 for every m ≥ 2.

Note that our method can handle this simple correlated mistakes case and gives all useless experts weight zero, by Theorem 3.4. We define h_0 as the data classifier such that h_0(x)_0 = h_0(x)_1 = 0.5. We will show that this meaningless data classifier h_0 has a much higher likelihood than h*, which shows that in this simple correlated mistakes case the MLE based method obtains meaningless results. We define a data classifier h's maximal expected likelihood as

max_{W^1, W^2, ..., W^M ∈ T} E_{X, Y^[M]} log Σ_c h(X)_c Π_{m=1}^M W^m_{c, Y^m}.

Theorem E.3 (MLE fails for correlated mistakes). In the scenario defined by Example E.2, the meaningless classifier h_0's maximal expected likelihood is at least 2 log 0.5, while the Bayesian posterior classifier h*'s maximal expected likelihood is 100 log 0.5 ≪ 2 log 0.5.

The above theorem implies that the MLE based method fails in Example E.2.

Proof. For the Bayesian posterior classifier h*, since X = Y = Y^1 and Y^2 = ⋯ = Y^M, h*(X = c) is a one-hot vector whose c-th entry is 1, and everything is determined by the realizations of Y and Y^2.
E_{X, Y^[M]} log Σ_c h*(X)_c Π_{m=1}^M W^m_{c, Y^m}
  = Σ_{x, y^[M]} P(X = x, Y^[M] = y^[M]) log Σ_c h*(x)_c Π_{m=1}^M W^m_{c, y^m}
  = Σ_{c, y^[M]} P(X = c, Y^[M] = y^[M]) log Π_{m=1}^M W^m_{c, y^m}                   (h*(c) is the one-hot vector at c)
  = Σ_{c, c'} P(Y = c) P(Y^2 = c') log [W^1_{c,c} Π_{m=2}^M W^m_{c, c'}]               (X = Y = Y^1, Y^2 = ⋯ = Y^M)
  = Σ_c P(Y = c) log W^1_{c,c} + Σ_{m=2}^M Σ_c P(Y = c) Σ_{c'} P(Y^2 = c') log W^m_{c, c'}
  ≤ Σ_{m=2}^M Σ_c P(Y = c) Σ_{c'} P(Y^2 = c') log W^m_{c, c'}                          (since log W^1_{c,c} ≤ 0)
  ≤ Σ_{m=2}^M Σ_c P(Y = c) Σ_{c'} P(Y^2 = c') log P(Y^2 = c')                          (W^m is a transition matrix and p · log q ≤ p · log p)
  = 100 log 0.5                                                                         (Y^2 equals 0 with probability 0.5 and 1 with probability 0.5).

The maximal value is obtained by setting W^1 to the identity matrix and setting W^2 = ⋯ = W^M to the matrix whose entries are all 0.5. Thus, the Bayesian posterior data classifier h*'s maximal expected likelihood is 100 log 0.5.

For the meaningless data classifier h_0,

E_{X, Y^[M]} log Σ_c h_0(X)_c Π_{m=1}^M W^m_{c, Y^m}
  = Σ_{x, y^[M]} P(X = x, Y^[M] = y^[M]) log Σ_c h_0(x)_c Π_{m=1}^M W^m_{c, y^m}
  = Σ_{x, y^[M]} P(X = x, Y^[M] = y^[M]) log [0.5 Σ_c Π_{m=1}^M W^m_{c, y^m}]
  = Σ_{a, c'} P(Y = a) P(Y^2 = c') log [0.5 Σ_c W^1_{c, a} Π_{m=2}^M W^m_{c, c'}]       (X = Y = Y^1, Y^2 = ⋯ = Y^M).

Note that when we set W^1 to the matrix whose entries are all 0.5 and every W^m with m ≥ 2 to the identity matrix, the inner sum equals 0.5 for every (a, c'), so the above formula equals log 0.25 = 2 log 0.5. Thus, the meaningless data classifier h_0's maximal expected likelihood is at least 2 log 0.5.
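Following the proof above, here is a minimal NumPy sketch that evaluates both expected likelihoods exactly for Example E.2; the helper and variable names are illustrative, not from the paper.

```python
import numpy as np

# Example E.2: C = {0, 1}, uniform prior, X = Y, expert 1 reports Y,
# experts 2..101 all report the same fair coin Z. The atoms (Y, Z) each have probability 0.25.
M = 101

def expected_loglik(h_of_x, W1, W_rest):
    """Exact expected MLE objective E log sum_c h(X)_c prod_m W^m_{c, Y^m} for this example.
    W_rest is the common transition matrix of experts 2..M (their labels coincide)."""
    total = 0.0
    for y in (0, 1):            # ground truth (= X = Y^1)
        for z in (0, 1):        # coin (= Y^2 = ... = Y^M)
            lik = sum(h_of_x(y)[c] * W1[c, y] * W_rest[c, z] ** (M - 1) for c in (0, 1))
            total += 0.25 * np.log(lik)
    return total

h_star = lambda x: np.eye(2)[x]             # Bayesian posterior classifier: one-hot at Y
h_0 = lambda x: np.array([0.5, 0.5])        # meaningless classifier
I, half = np.eye(2), np.full((2, 2), 0.5)

print(expected_loglik(h_star, I, half))     # = 100 * log(0.5), the maximum achievable by h*
print(expected_loglik(h_0, half, I))        # = 2 * log(0.5), far larger than 100 * log(0.5)
```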
1. What is the focus and contribution of the paper regarding learning from crowdsourced worker labels and actual data?
2. What are the strengths and weaknesses of the proposed algorithm, particularly its key "information intersection" assumption?
3. How does the paper compare to other relevant literature, such as "Learning From Noisy Singly-labeled Data"?
4. Do you have any concerns regarding the i.i.d. assumption across values of "i" and its relation to accommodating correlated mistakes?
5. How do recent papers on crowdsourcing that go beyond restricting workers to have a common confusion matrix relate to the submission?
6. Where is the later reference mentioned on page 5?
7. What are your thoughts on the sufficient statistic assumption in Theorem 3.4, especially with an example provided?
8. What are your suggestions for improving the experiments to make them more convincing and reasonable?
Review
Review EDIT: I thank the authors for providing all clarifications. I think this paper is a useful contribution. It will be of interest to the audience at the conference.

Summary: This paper provides a method to jointly learn from crowdsourced worker labels and the actual data. The key claimed difference is that previous works on crowdsourced worker labels ignored the data. At a higher level, the algorithm consists of maximizing the mutual information gain between the worker labels and the output of a neural network (or more generally any ML model) on the data.

Evaluation: I like the idea behind the algorithm. However, there are several issues on which I ask the authors to provide some clarity. I will provide a formal "evaluation" after that. (For the moment, please ignore the "rating". I will provide one after the rebuttal.)

(1) As the authors clarified, one key aspect of the "information intersection" assumption is that the crowdsourced labels are statistically independent from the data when conditioned on the ground truth. How strongly does this coincide with reality? Since the work is primarily empirical, is there any evidence on this front?

(2) In the abstract, introduction, etc., what does it mean to say that the algorithm is an "early algorithm"? -- Thanks for the clarification. I would suggest using the term "first algorithm" in such cases. However, is this the first algorithm towards this goal? See point (3).

(3) The submitted paper misses an extremely relevant piece of literature: "Learning From Noisy Singly-labeled Data" (arXiv:1712.04577). This paper also aims to solve the label + features problem together. How do the results of this paper compare to those of this submission?

(4) "Model and assumptions": Is the i.i.d. assumption across the values of "i"? Then does that not violate the earlier claim of accommodating correlated mistakes?

(5) Recent papers on crowdsourcing (such as "Achieving budget-optimality with adaptive schemes in crowdsourcing", arXiv:1602.03481, and "A Permutation-based Model for Crowd Labeling: Optimal Estimation and Robustness", arXiv:1606.09632) go beyond restricting workers to have a common confusion matrix for all questions. In this respect, these are better aligned with the realistic scenario where the error in labeling may depend on the closeness to the decision boundary. How do these settings and algorithms relate to the submission?

(6) Page 5: "Later we will show...." Later where? Please provide a reference.

(7) Theorem 3.4, the assumption of the existence of experts such that Y^S is a sufficient statistic for Y: For instance, suppose there are 10 experts who all have a 0.999 probability of correctness (assume symmetric confusion matrices) and there are 5 non-experts who have a 0.001 probability of correctness; even if we suppose all are mutually independent given the true label, does this satisfy this sufficient statistic assumption? This appears to be a very strong assumption, but perhaps the authors have better intuition?

(8) The experiments comprise only some simulations. The main point of experiments (particularly in the absence of any theoretical results) towards bolstering the paper is to ensure that the assumptions are at least somewhat reasonable. I believe there are several datasets collected from Amazon Mechanical Turk available online? Otherwise, would it be possible to run realistic experiments on some crowdsourcing platforms?
EX,Y [M] log∑ c h∗(X)cΠMm=1Wmc,ym = ∑ x,y[M] P (X = x,Y [M] = y[M]) log∑ c h∗(x)cΠMm=1Wmc,ym = ∑ c,y[M] P (X = c, Y [M] = y[M]) log∑ c h∗(c)cΠMm=1Wmc,ym =∑ c,c′ P (Y = c)P (Y 2 = c′) logW 1c,cΠMm=2Wmc,c′ (X = Y = Y 1, Y 2 = ⋯ = YM ) =∑ c P (Y = c) logW 1c,c + M ∑ m=2 ∑ c P (Y = c)∑ c′ P (Y 2 = c′) logWmc,c′ ≤ M ∑ m=2 ∑ c P (Y = c)∑ c′ P (Y 2 = c′) logWmc,c′ ≤ M ∑ m=2 ∑ c P (Y = c)∑ c′ P (Y 2 = c′) logP (Y 2 = c′) (Wm is a transition matrix and p ⋅ logq ≤ p ⋅ logp) =100 log 0.5 (Y 2 equals 0 with probability 0.5 and 1 with probability 0.5 as well) The maximal value is obtained by setting W1 as an identity matrix and setting W2 = ⋯ = WM as ( 0.5 0.5 0.5 0.5 ). Thus, the Bayesian posterior data classifier h∗’s maximal expected likelihood is 100 log 0.5. For the meaningless data classifier h0, EX,Y [M] log∑ c h0(X)cΠMm=1Wmc,ym = ∑ x,y[M] P (X = x,Y [M] = y[M]) log∑ c h0(x)cΠMm=1Wmc,ym = ∑ x,y[M] P (X = x,Y [M] = y[M]) log 0.5∑ c ΠMm=1W m c,ym =∑ c,c′ P (Y = c)P (Y 2 = c′) log 0.5∑ c ΠMm=1W m c,c′ Note when we set every Wm as an identity matrix, the above formula equals log 0.5. Thus, the meaningless data classifier h0’s maximal expected likelihood is at least log 0.5.
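To make the MLE baseline discussed in this section concrete, here is a small sketch of the objective ∑i log ∑c h(xi)c ∏m Wm c,ymi that Raykar et al. (2010) and AggNet maximize; the function and its names are ours, it is an illustrative restatement of the displayed formula rather than code from the cited papers, and it is the quantity whose maximal expected value Theorem E.3 compares for h∗ and h0.

```python
import numpy as np

def mle_objective(probs, confusions, crowd_labels):
    """Empirical MLE objective: sum_i log sum_c h(x_i)_c * prod_m W^m_{c, y_i^m}.

    probs:        (N, C) array of classifier outputs h(x_i) (rows sum to 1).
    confusions:   list of M row-stochastic (C, C) matrices W^m.
    crowd_labels: (N, M) int array of expert labels.
    """
    N, C = probs.shape
    total = 0.0
    for i in range(N):
        # log prod_m W^m_{c, y_i^m}, accumulated per candidate true class c
        log_prod = np.zeros(C)
        for m, W in enumerate(confusions):
            log_prod += np.log(W[:, crowd_labels[i, m]] + 1e-12)
        total += np.log(np.sum(probs[i] * np.exp(log_prod)) + 1e-300)
    return total
```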
1. What is the main contribution of the paper, and what are the pros and cons of the proposed approach? 2. How does the reviewer assess the assumption on the existence of mutually independent senior experts in the labeling process? 3. What are the concerns regarding the hard-to-check assumption for Theorem 3.4 in real-world problems? 4. How would the reviewer suggest checking the sufficiency of senior expert's information to predict the true class label? 5. What are the thoughts on the experiment section and the performance of the proposed approach compared to other methods? 6. Would combining all experts in one setting and applying the proposed approach without prior knowledge of who are senior/junior experts be a better approach? 7. Were all experts required to label all data points or only a subset of training data points? 8. Are there any minor suggestions or comments regarding the paper's content or presentation?
Review
Review Top pros: - Well motivated approach with good examples from clinical setting - Sound proof on why information theoretical approach is better than MLE based approaches - Experiments on diversified data sets to show their approach's performance, with good implementation details. Top cons: - Fairly strong assumption on the existence of mutually independent senior experts in the labeling process - Hard-to-check assumption for Theorem 3.4 for real world problems, on the sufficiency of senior expert's info to predict the true class label The paper is in general well written, and builds upon existing work on crowdsourced data mining and co-training. I believe this line of work will benefit the community in taking a more information theoretical approach with relaxed assumptions on the data collection process. My main feedback is how to check the existence of senior experts in real-world applications. In particular, - If the labels are collected from an unknown setup (e.g. on AMT), where it is hard to establish the dependency structure of the experts, how can we use such approaches effectively? - Even if there exists a clear line between senior/junior experts in the labeling process, how do we know or check that the senior experts' opinion can sufficiently estimate the true labels? In the experiment section, the label data was collected with a build-in assumption of senior/junior labelers, and we also know exactly who are senior/junior experts. So it is not surprising that the proposed approach outperforms other approaches. It's also interesting to see that AggNet isn't that bad in general compared to the proposed approach (except on LUNA16). What if we combine all experts in one setting and apply the proposed approach without prior knowledge of who are senior/junior? Also, did you require all experts to label ALL the data points or only a subset of training data points? Minor points: - I don't believe "Naive majority" is an interesting setting - we can easily detect those junior experts that always label cases with one class, and remove these experts from the system, in practice. - I wouldn't call this an "early" algorithm as it indicates it's somewhat pre-mature. Just call this a novel approach that is in the early phase, and more sophisticated approach can be further developed.
ICLR
Title RMSprop converges with proper hyper-parameter Abstract Despite the existence of divergence examples, RMSprop remains one of the most popular algorithms in machine learning. Towards closing the gap between theory and practice, we prove that RMSprop converges with proper choice of hyperparameters under certain conditions. More specifically, we prove that when the hyper-parameter β2 is close enough to 1, RMSprop and its random shuffling version converge to a bounded region in general, and to critical points in the interpolation regime. It is worth mentioning that our results do not depend on “bounded gradient" assumption, which is often the key assumption utilized by existing theoretical work for Adam-type adaptive gradient method. Removing this assumption allows us to establish a phase transition from divergence to non-divergence for RMSprop. Finally, based on our theory, we conjecture that in practice there is a critical threshold β∗ 2 , such that RMSprop generates reasonably good results only if 1 > β2 ≥ β∗ 2 . We provide empirical evidence for such a phase transition in our numerical experiments. 1 INTRODUCTION RMSprop (Tieleman & Hinton, 2012) remains one of the most popular algorithms for machine learning applications. As a non-momentum version of a more general algorithm Adam, RMSprop’s good empirical performance has been well acknowledged by practitioners in generative adversarial networks (GANs) (Seward et al., 2018; Yazıcı et al., 2019; Karnewar & Wang, 2020; JolicoeurMartineau, 2019), reinforcement learning (Mnih et al., 2016), etc. In spite of its prevalence, however, Reddi et al. (2018) discovered that RMSprop (as well as the more general version Adam) can diverge even for simple convex functions. To fix the algorithm, the authors of Reddi et al. (2018) proposed a new variant called AMSGrad, which is guaranteed to converge under certain conditions. Since then, it has been an active area of research to design provably convergent variants of RMSprop. These variants include AdaFom (Chen et al., 2019), Adabound (Luo et al., 2019), Nostalgic Adam (Huang et al., 2019), Yogi (Zaheer et al., 2018), and many more. Despite the variants, the vanilla RMSprop indeed works well in practice, and after proper hyper-parameter tuning, the non-convergence issue has not been commonly observed. Why is there a large gap between theory and practice? Is this because the real-world problems are likely to be “nice”, or is it because the theoretical analysis of RMSprop does not match how it is used in practice? With the above questions in mind, we revisited the counter-example of Reddi et al. (2018), and found an interesting phenomenon. One counter-example of Reddi et al. (2018) is the following: ft(x) = { Cx, for t mod C = 1 −x, otherwise (1) where x ∈ [−1, 1]. They proved the divergence under the condition β2 ≤ min{C− 4 C−2 , 1− ( 9 2C )2}, where β2 is the second order momentum coefficient in Algorithm 1 (the algorithm is presented later). For instance, when C = 10, then the algorithm diverges if β2 < 0.3. Reddi et al. (2018) mentioned ∗IOE, University of Michigan, [email protected]. Part of the work was done when Naichen Shi was working with Prof. Ruoyu Sun as an intern. †ISE, University of Illinois at Urbana-Champaign. [email protected] ‡ECE, University of Minnesota - Twin Cities, [email protected]. §University of Illinois at Urbana-Champaign. [email protected]. Corresponding author: Ruoyu Sun. 
that “this explains why large β2 is advisable while using Adam algorithm”, but they did not analyze whether large β2 leads to convergence in their example. We ran simulation for problem (1) with different β2 and found there is always a threshold of β2 above which RMSprop converges, see Figure 1. For instance, when C = 10, the transition point of β2 is roughly 0.955: the algorithm converges if β2 > 0.956 but diverges if β2 < 0.955. In general, there is a curve of phase transition from divergence to convergence, and such a curve slopes upward, which means the transition point is closer to 1 if C becomes larger. Based on this observation, we make the following conjecture: Conjecture: RMSprop converges if β2 is large enough. Before further discussion, we introduce the following assumption. Assumption 1.1. f(x) = ∑n−1 j=0 fj(x), and n−1∑ j=0 ‖∇fj (x)‖22 ≤ D1 ‖∇f (x)‖ 2 2 +D0. (2) We divide optimization problems into 2 classes: realizable problems where D0 = 0 and non-realizable problems where D0 > 0. When D0 = 0, the assumption (1.1) becomes∑n−1 j=0 ‖∇fj(x)‖ 2 2 ≤ D1 ‖∇f(x)‖ 2 2 , which is called “strong growth condition” (SGC) (Vaswani et al., 2019). It requires the norm of the stochastic gradient to be proportional to the batch gradient norm. When ‖∇f(x)‖ = 0, under SGC we have ‖∇fj(x)‖ = 0 for all j. For linear regression problems, SGC holds if the linear model can fit all data. More specifically, for the problem minx ‖Ax‖2 = ∑n j=1 ( aTj x )2 where A is an n by n matrix and aTj is the j-th row vector of A, SGC holds with D1 ≤ λmax (∑n i=1 aia T i aia T i ) /λmin ( ATA ) (Raj & Bach, 2020). SGC can be viewed as a simple condition that models overparameterized neural networks capable of interpolating all data points (Vaswani et al., 2019). Therefore, in this work we use the terminology “realizable problems” to refer to the problems that satisfy SGC. 1.1 MAIN CONTRIBUTIONS In an attempt to resolve the conjecture, we delve into RMSprop’s convergence issues and obtain a series of theoretical and empirical results. Our contributions are summarized below: • We find that RMSprop’s convergence is contingent on the choice of β2. For general optimization problems, there are two types of hyper-parameters: problem-dependent hyperparameters such as step size in GD, and universal constants such as momentum coefficient in heavy ball method 1. Our result reveals that β2 is closer to the first type. • We prove that RMSprop converges to stationary point for realizable problems (interpolation regime), and to some bounded region for non-realizable problems. Combining with the divergence example of RMSprop, this indicates the existence of a phase transition from divergence to convergence dependent on β2. Note that when we say “convergence”, in a weak sense it means the sequence converges to a bounded region for non-realizable case; and in a strong sense it means the sequence converges to stationary points for realizable case. • To our best knowledge, we are the first to prove the convergence of RMSprop and some of Adam without any form of assumption about the boundedness of the gradient norm. This is important for showing the transition: with added assumptions on bounded gradients, the gradients cannot diverge, while the counter-example shows that the gradient can. 2 PRELIMINARIES We consider a finite-sum problem: min x∈Rd f(x) = n−1∑ j=0 fj(x). (3) In neural network training, fj usually represents the loss contributed by the j-th sample batch. We present randomly shuffled Adam in Algorithm 1. 
RMSProp is the special case of Adam with β1 = 0. In this work, we mainly focus on RMSprop; nevertheless, we will present a result for a special case of Adam with small β1.
Algorithm 1 Randomly Shuffled Adam
Initialize m1,−1 = 1/(1− β1) · ∇f(x0) and v1,−1 = 1/(1− β2) · maxj{∇fj(x0) ◦ ∇fj(x0)}.
for k = 1→∞ do
Sample {τk,0, τk,1, · · · , τk,n−1} as a random permutation of {0, 1, 2, · · · , n− 1}
for i = 0→ n− 1 do
mk,i = β1mk,i−1 + (1− β1)∇fτk,i
vk,i = β2vk,i−1 + (1− β2)∇fτk,i ◦ ∇fτk,i
xk,i+1 = xk,i − ηk·n/(√vk,i + ε) ◦ mk,i
end for
Break if a certain stopping criterion is satisfied.
xk+1,0 = xk,n, vk+1,−1 = vk,n−1, mk+1,−1 = mk,n−1
end for
return x
In Algorithm 1, x denotes the optimization variable, m denotes the first order momentum and v denotes the second order momentum. Specifically, we denote xk,i, mk,i, vk,i ∈ Rd as the value of x, m, v at the k-th outer loop and i-th inner loop, respectively. We denote ∇fj as the gradient of fj and let ◦ be the component-wise multiplication. The division of two vectors is component-wise as well. Moreover, we denote ηt as the step size and β1, β2 as the hyper-parameters in the algorithm. When n = 1, we obtain full-batch Adam. We replaced the bias correction step in (Kingma & Ba, 2015) with a special initialization of m1,−1 and v1,−1. This initialization can also correct the bias, but leads to cleaner results. Since the effect of initialization or bias correction becomes more and more negligible as the training progresses, RMSprop with zero initialization or our initialization will have the same asymptotic behavior. We put our results for the original version of RMSprop in the appendix. 1 Rigorously speaking, for the best convergence rate, the momentum coefficient should also be problem-dependent; but just for achieving convergence, it can be problem independent. As for the hyper-parameters, we choose ηt = η1/√t and fix β2 to be a constant that is independent of the iteration count. We allow ε to be an arbitrary non-negative constant; in particular, our result holds even for ε = 0. The constant ε is added in practice for numerical stability, and is typically chosen to be 10−6 or even 10−8. It is much smaller than √vk,i (which is roughly the size of the gradient norm).
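For reference, a minimal NumPy sketch of Algorithm 1 is given below; setting β1 = 0 gives randomly shuffled RMSprop. The function signature and names are ours, and the snippet is a sketch of the update rules rather than the authors' implementation.

```python
import numpy as np

def randomly_shuffled_adam(grads, x0, eta1=0.1, beta1=0.0, beta2=0.999,
                           eps=0.0, n_epochs=100):
    """Sketch of Algorithm 1. grads is a list of n callables; grads[j](x)
    returns the gradient of f_j at x. beta1 = 0 recovers RMSprop."""
    n = len(grads)
    x = np.array(x0, dtype=float)
    # Special initialization that replaces the bias-correction step.
    m = sum(g(x) for g in grads) / (1.0 - beta1)
    v = np.max(np.stack([g(x) ** 2 for g in grads]), axis=0) / (1.0 - beta2)
    for k in range(1, n_epochs + 1):
        eta = eta1 / np.sqrt(k * n)             # eta_{k*n} with eta_t = eta1 / sqrt(t)
        for j in np.random.permutation(n):      # random shuffling within the epoch
            g = grads[j](x)
            m = beta1 * m + (1.0 - beta1) * g
            v = beta2 * v + (1.0 - beta2) * g * g
            # eps may be zero as in the paper; use a small eps if v can vanish
            x = x - eta * m / (np.sqrt(v) + eps)
    return x
```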
However, these works assume ε to be relatively large compared to √vk,i. The issue is that such a choice essentially transforms RMSprop back to SGD, since the effective step size is primarily controlled by ε, in lieu of √vk,i. This is contrary to the spirit of RMSprop, which is to use an adaptive step size to accelerate convergence. A few other works do not need this assumption on ε, but they have other assumptions. De et al. (2018) analyze deterministic and stochastic RMSprop, but they utilize a rather unrealistic assumption that the signs of all noisy gradients are the same, i.e., sign(∇fp(x)) = sign(∇fq(x)) for all p, q. Chen et al. (2019) describe a few quantities based on the iterates, and prove that if they grow at a certain speed as the iterations proceed, the algorithm converges. The drawback is that the condition cannot be checked a priori. Besides the assumptions mentioned above, all the aforementioned works require the gradient to be bounded. In general, removing boundedness assumptions (of any kind, including bounded gradient, bounded iterates, etc.) is not necessarily easy. Thus, such results are appreciated even for basic SGD. For instance, Bertsekas & Tsitsiklis (2000) presents a nice discussion of various results on inexact GD without involving conventional boundedness assumptions, and claims “bounded-assumption-free” as one of the main contributions of their work. Very recently, we noticed another work (Liu et al., 2020b) which removes the bounded gradient assumption for SGDM (SGD with momentum) and obtains satisfactory rates. Nevertheless, we are not aware of an existing result on RMSprop that does not require the bounded gradient assumption. We will explain later why removing this bounded gradient assumption is particularly important for our paper. 3 THE raison d’être FOR β2 Figure 1 clearly demonstrates the important role of β2 in the convergence of RMSprop. Specifically, a sufficiently large β2 is critical for RMSprop’s convergence. Indeed, some recent works (Reddi et al., 2018; Zhou et al., 2019) have also made similar arguments, but they focus on understanding one part of the phenomenon, that is, small β2 leads to divergence. Our goal in this work is to complete the other part of the story by showing that a sufficiently large β2 guarantees convergence. The formal result will be provided in Sec. 4. To understand the function of β2, we first discuss why RMSprop diverges. It is known that the stochastic noise due to mini-batching will distort the gradient direction, leading to possible divergence, but in standard SGD, the distortion in multiple iterations is eliminated since the stochastic gradient is an unbiased estimate of the gradient. For RMSprop, at a given iteration the scaling factor 1/√v in the update direction may cause larger gradient distortion than standard SGD. The distortion can be so significant that the average updating direction falls outside the dual cone of the true gradient. To illustrate this, consider the extreme case that β2 = 0 and ε = 0 (i.e., signSGD) and the special example (1). When applying signSGD to solve (1), in each epoch, which consists of C iterations, one iteration will move x left followed by C − 1 iterations that move x right. Since all step sizes are the same in one epoch, the accumulated effect of one epoch makes x move in the ascending direction, instead of the descending direction (with step size η, the net displacement over one epoch is −η + (C − 1)η = (C − 2)η > 0, even though the full-batch gradient ∑t∇ft(x) = C − (C − 1) = 1 > 0 calls for moving x in the negative direction). Then why does large β2 help? Intuitively, a large β2 can control the distortion on update directions.
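This intuition is easy to probe numerically on example (1). The scalar sketch below is our own illustration: it uses a constant step size rather than the η1/√t schedule and simply clips the iterate to [−1, 1], so it only illustrates the direction of drift and is not a reproduction of Figure 1. For small β2 the iterate typically drifts toward +1 (the wrong direction), while for large β2 it drifts toward the constrained minimizer −1.

```python
import numpy as np

def drift_on_example_1(beta2, C=10, eta=0.01, n_epochs=2000, eps=1e-8):
    """Run scalar RMSprop (zero-initialized v) on example (1) with x clipped
    to [-1, 1] and return the final iterate; descent should push x to -1."""
    x, v, t = 0.0, 0.0, 0
    for _ in range(n_epochs):
        for _ in range(C):
            t += 1
            g = float(C) if t % C == 1 else -1.0   # gradient of f_t at any x
            v = beta2 * v + (1.0 - beta2) * g * g
            x = float(np.clip(x - eta * g / (np.sqrt(v) + eps), -1.0, 1.0))
    return x

for b2 in (0.3, 0.99):
    print(b2, drift_on_example_1(b2))
```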
In the extreme case that β2 = 1 and ε = 0, RMSprop reduces to SGD, where the distortion of multiple iterations can be mitigated, leading to convergence. We suspect that β2 does not need to be exactly 1, and that a large β2 is enough to control the distortion. Our experiment in Figure 1 confirms that, at least for the counter-example of Reddi et al. (2018), there is an interval β2 ∈ [c, 1] such that RMSprop converges. What was initially not clear is whether the counter-example of Reddi et al. (2018) is a very special case or the convergence of large-β2-RMSprop holds for all problems. We found that the real situation is somewhat more tricky. For non-realizable problems, we discovered an example for which RMSprop cannot converge to the minimum for a wide range of β2 < 1, but unlike the small-β2 case the iterates converge to a small ball around the minimum. This motivates us to distinguish three convergence situations: divergence, convergence to a small region, and convergence to critical points. What we can prove for the general problem is (see Theorem 4.3): for small β2, RMSprop can diverge; for large β2, RMSprop must converge to a small region whose size depends on β2. Then why do we observe convergence to a single point in the experiment for (1)? We suspect this is because problem (1) is realizable, and conjecture that the property of “convergence to critical points” holds for all realizable problems. We indeed prove this conjecture (see Corollary 4.1): large-β2-RMSprop converges to critical points if the problem satisfies SGC. We summarize our findings about the convergence properties of RMSprop in Table 1. Note that our results do not conflict with Theorem 3 in Reddi et al. (2018), which claims that “for any constant β1 and β2 there exists a divergent example”, since here we choose β2 to be problem-dependent, just like one chooses a step size < 2/L for GD, where L is a problem-dependent parameter. Another remark is that though β2 could be close to 1, RMSprop still retains the ability to adapt v to the squared gradient norm as long as β2 < 1, because new gradient signals are added at each iteration and the impact of previous signals decays exponentially. It is this adaptive ability that distinguishes RMSprop from SGD. Proving the theoretical advantage of RMSprop over SGD (i.e., that choosing β2 < 1 is better than β2 = 1) is a very intriguing question; in general, the theoretical advantage of adaptive gradient methods (including RMSprop and AdaGrad) over SGD is a long-standing question. In this work, we focus on the fundamental problem of convergence, instead of the more challenging question of justifying the advantage of RMSprop. 4 CONVERGENCE RESULTS In this section, we present the formal theoretical results. We start from the results for full-batch RMSprop/Adam, and then present the results for the stochastic versions. Note that random shuffling is not a key factor, and the proof works for other settings. 4.1 FULL-BATCH VERSION We first consider the full-batch version of RMSprop. The following theorem shows that if we use all samples to evaluate the gradient, RMSprop with diminishing step size converges to critical points regardless of the choice of β2. Here we consider the popular step size schedule ηt = η1/√t. Theorem 4.1. (convergence of full-batch RMSprop) For problem (3) with n = 1, assume that f is gradient Lipschitz continuous with constant L and lower bounded by f∗. Then, for full-batch RMSprop (Alg.
1 with β1 = 0, = 0) with diminishing step size ηt = η1√t and any β2 ∈ (0, 1), we have: min t∈(1,T ] ‖∇ft‖1 ≤ O ( log T√ T ) where T > 0 is the total iteration number. De et al. (2019) also proves the convergence of full batch RMSprop, but they require the gradient norm to be upper bounded; in contrast, we do not need this assumption, and only require lower-boundedness and L-smoothness of f . Our result suggests that the convergence property of batch-RMSprop is similar to signSGD, an algorithm that only uses the sign of gradient to calculate its descent direction (Bernstein et al., 2018): in the full-batch setting, signSGD (which can be called sign GD) has also been proved to converge without bounded gradient assumption. Below, we also derive an analogous result for full-batch Adam with only one additional constraint β1 < √ β2 < 1, which is often satisfied in practice: Theorem 4.2. (convergence of full-batch Adam) For optimization problem (3) with n = 1, assume that f is gradient Lipschitz continuous with constant L and lower bounded by f∗. Then, for full-batch Adam with diminishing step size ηt = η1√t and any β1 < √ β2 < 1, we have: min t∈(1,T ] ‖∇ft‖1 ≤ O ( log T√ T ) . 4.2 STOCHASTIC VERSIONS As mentioned earlier, our simulation shows that RMSprop may not converge to critical points for non-realizable problems (an example is provided in the appendix). Nevertheless, we can still show randomly shuffled large-β2-RMSprop converges to a bounded region: Theorem 4.3. (large-β2 RMSprop converge to a region) For problem (3), assume f is lower-bounded by f∗ and all ∇fj is L-Lipschitz continuous. Furthermore, assume (2) holds, and β2 satisfies T2 (β2) , √ 10dn βn2 dnD1 (1− β2) ( 4n2 βn2 − 1 ) 2 + ( 1√ βn2 − 1 ) ≤ √2− 1 2 √ 2 , (4) Then, for randomly shuffled RMSprop with ηt = η1√t , we have min t∈(1,T ] min{‖∇fnt‖1 , ‖∇fnt‖ 2 2 √ D1d D0 } ≤ O ( log T√ T ) +O ( Q3,3 √ D0 ) , ∀ T ≥ 4. Here Q3,3 > 0 is a β2-dependent constant that goes to zero in the limit as β2 → 1. Remark 1. This result and the result in Reddi et al. (2018) together distinguish large-β2-RMSprop and small-β2-RMSprop: the former converges to a bounded region, while the latter can diverge. Note that there is a gap between the lower bound of β2 and the upper bound of β2 in the counter-example. We do not try to provide tight bounds on the threshold of β2 , as our main goal is to show a qualitative difference between large-β2-RMSprop and small-β2-RMSprop. Remark 2: Condition (4) in Theorem 4.3 implies that 1− β2 ≤ O ( n−3.5 ) . In the appendix we introduce three problem-dependent parameters ρ1 ∈ [1, √ n], ρ2 ∈ [0, n], and ρ3 ∈ [1, √ n] in equations(14), (15) and (16), and improve the sufficient condition (4) to 1− β2 ≥ O (1/ (nρ1ρ2ρ3)). For the worst case, the bound is O ( n−3.5 ) , just like condition (4) in Theorem 4.3. In actual training process, ρ1, ρ2, and ρ3 may not reach their upper bounds, thus the threshold of β2 can be lower in practice (see Appendix A.5 for some empirical estimate of ρi’s). The dependence on the number of batches n suggests that as n increases, the required hyper-parameter β2 should be larger. This is understandable since more minibatches means larger noise in the stochastic gradient, and thus larger β2 is required. There is a gap between our theoretical bound of β2 and the empirical transition point, and it is an interesting future question to close this gap. Remark 3. 
We point out three possible algorithm behaviors: divergence to infinity (or divergence for short), convergence to a bounded region (or non-divergence for short) and convergence to critical points. We distinguish the three cases, making it easier to explain the qualitative difference of small-β2 and large-β2 regime. For non-realizable cases, the phase transition is from divergence to non-divergence. Therefore, it is important to discard the bounded-gradient assumption: this assumption eliminates the possibility of divergence of gradients a priori. To be clear, there are actually two sub-cases of non-divergence: iterates can stay in a bounded but huge region (bad case), or iterates stay in a bounded region dependent on some parameters (good case). Indeed, the “convergence” of constant-stepsize SGD is in the sense of “converging to a region with size proportional to the noise variance”. Our result of “converging to bounded region” is also meaningful as the size of the region goes to zero as the noise variance goes to 0 or D0 goes to 0 (realizable case). Note that “divergence” can be also interpreted as “not converging to critical points” which is the notion used in Reddi et al. (2018), instead of “diverging to infinity”. We use the latter concept of “diverging to infinity” for the term “divergence”, because “not converging to critical points” can include the good case of converging to a small region around critical points (like constant-stepsize SGD). In the example of Reddi et al. (2018), a constrained problem is considered (bound constraint [-1,1]), thus divergence to infinity cannot happen. We add an example where the iterates and the gradients can diverge to infinity for small β2; see Appendix A.2. As a corollary of Theorem 4.3, if the optimization problem satisfies SGC (i.e. D0 = 0), RMSprop converges to critical points. Corollary 4.1. Suppose the assumptions of Theorem 4.3 holds. Further, assume (2) holds with D0 = 0, i.e., ∑n−1 j=0 ‖∇fj (x)‖ 2 2 ≤ D1 ‖∇f (x)‖ 2 2 for all x. we have: min t∈(1,T ] ‖∇fnt‖1 ≤ O ( log T√ T ) , ∀ T ≥ 4. With the above corollary, the numerical result in Figure 1 should not be surprising: problem (1) satisfies the strong growth condition, and thus there is always a range of β2 inside which RMSprop converges. We just need to tune β2 larger. We can prove similar convergence results for Adam with small β1 and large β2. Theorem 4.4. For optimization problem (3), assume that f is lower-bounded by f∗ and fj is gradient Lipschitz continuous with constant L for all j. Furthermore, assume that fj satisfies (2) for all x. Then, for randomly shuffled Adam with diminishing step size ηt = η1√t and β1, β2 satisfying T1 (β1, β2)+T2 (β2) < 1− 1√2 , we have mint∈[1,T ] ‖∇fnt‖1 ≤ O ( log T√ T ) +O ( Q3,5 √ D0 ) ∀ T ≥ 4, where Q3,5 is a constant that approaches 0 in the limit T1 + T2 → 0, T2 is defined in (4), and T1 is defined as T1 (β1, β2) = √ 5dn βn2 dn2D1 β1 βn2 ( 1−β1 1−βn1 + 1 ) . Remark: This result shows that controlling β2 and β1 together can ensure convergence of Adam. We conjecture that the same convergence can be proved for a large range of β1, but we are not able to prove that for now (which is why we focus on RMSprop in this work) and leave it to future work. 5 EXPERIMENTS We conduct experiments of image classification and GAN on MNIST and CIFAR-10 to support our theoretical findings. The details of GAN experiments are in the appendix, and in this section we focus on image classification results. 
We visualize the optimization trajectory when training on MNIST for small β2 = 0.8 and large β2 = 0.99 in Figure 2. We observe different behaviors: while the trajectory of β2 = 0.8 moves away from the bottom of the basin, for larger β2 the trajectory stays in the level set and has decreasing loss values. In the CIFAR experiments, we use ResNet-18. We choose β2 = 0.8, 0.9, 0.95, and 0.99. With different batch sizes 8, 16, 32, we run each algorithm for 100 epochs without explicit regularization. Table 2 shows two phenomena: first, for fixed batch size, there is a transition point of β2 above which the accuracy suddenly jumps; second, the transition point is closer to 1 as the batch size decreases. More specifically, for batch size 8, the transition point lies in [0.95, 0.99]: the average training accuracy is 44.53% for β2 = 0.95, but jumps to 99.74% for β2 = 0.99. For batch size 16, the transition point lies in [0.9, 0.95]: the average training accuracy is 67.27% for β2 = 0.9, but jumps to 96.38% for β2 = 0.95. For batch size 32, the transition point lies in [0.8, 0.9]. As the batch size increases from 8 to 16 and then 32, the transition point decreases from 0.99 to 0.95 and then to 0.9. These two phenomena are consistent with our theory. The first phenomenon can be explained by Theorem 4.3 and Corollary 4.1, which state that for large enough β2, RMSprop converges. The second phenomenon can be explained by Theorem 4.3 as well: as explained in Remark 2, the required β2 decreases as the number of mini-batches n decreases, i.e., as the batch size increases. Next, we demonstrate that the convergence speed of SGD is much slower than that of Adam under the same experiment setting as Table 2. We compare the average training and test accuracy at the 10-th epoch. As Table 3 shows, the accuracy of Adam is much higher than that of SGD at the 10-th epoch. All code generating the experimental results is available in the GitHub repository https://github.com/soundsinteresting/RMSprop 6 CONCLUSION In this work, we study the convergence behavior of RMSprop by taking a closer look at the hyperparameters. Specifically, for realizable problems, we provide a data-dependent threshold of β2 above which we prove the convergence of randomly shuffled RMSprop and small-β1 Adam without the bounded gradient assumption. We also show that RMSprop converges to a bounded region under non-realizable settings. These findings reveal that there is a critical threshold of β2 regarding the convergence behavior of RMSprop, and the phase transition is supported by the numerical experiments. Our results provide basic guidelines for tuning hyper-parameters in practice. 7 ACKNOWLEDGEMENT M. Hong is supported by NSF grant CMMI-1727757. Ruichen Li from Peking University helped check part of the proof of Theorem 4.3. We thank all anonymous reviewers for their feedback. We also want to thank Eduard Gorbunov and Juntang Zhuang for pointing out some mistakes on OpenReview in the earlier versions.
1. What is the focus of the paper, and what are the key contributions? 2. How does the paper revise the famous counterexample on the convergence of Adam? 3. What are the differences between the authors' approach and previous works on the topic? 4. How does the paper clarify the misleading claim about Adam's convergence? 5. What are some suggestions for improving the paper, such as citing relevant works and improving figure quality?
Review
Review This work revisits a famous counterexample on the convergence of Adam (originally presented in Reddi 2018). The authors show that, if the EMA parameter beta2 in RMSprop and Adam is chosen high enough, then both methods converge to a bounded region in the stochastic setting. In addition, the authors provide some results for the full-batch case. Crucially, and differently from many other papers on the topic, the gradients are not assumed to be bounded and the beta2 hyperparameter is not chosen to increase to 1. The paper is well written and the logic of it is convincing. I like the introduction and Figure 1 (this nicely illustrates the relevance of this paper). It is also very well organized. Unfortunately, I did not have the time I wish I had to dig into the proofs (just had a quick check), but the methodology of the authors and the results are convincing. This is overall a very nice paper, with clean and easy to read results, that clarifies an important point: it is misleading to claim that “Adam does not converge” (which was pointed out in Reddi 2018 to introduce AMSgrad). I have heard this (wrong) claim many times in the optimization community – hence I think this paper deserves attention (therefore my clear accept). This work truly does merge the gap between theory and practice in non-convex stochastic optimization. Just a few suggestions: I think the authors should cite and discuss the results in Defossez et al. 2019 (On the convergence of Adam and Adagrad). Also, I think Figure 1 deserves better quality. It's done in matlab so in the xlabel command you can put 'interpreter','latex' and 'fontsize',20. Finally, I spotted 1 typo: in Remark2 “cases of non-divergence cases”.
ICLR
Title RMSprop converges with proper hyper-parameter Abstract Despite the existence of divergence examples, RMSprop remains one of the most popular algorithms in machine learning. Towards closing the gap between theory and practice, we prove that RMSprop converges with proper choice of hyperparameters under certain conditions. More specifically, we prove that when the hyper-parameter β2 is close enough to 1, RMSprop and its random shuffling version converge to a bounded region in general, and to critical points in the interpolation regime. It is worth mentioning that our results do not depend on “bounded gradient" assumption, which is often the key assumption utilized by existing theoretical work for Adam-type adaptive gradient method. Removing this assumption allows us to establish a phase transition from divergence to non-divergence for RMSprop. Finally, based on our theory, we conjecture that in practice there is a critical threshold β∗ 2 , such that RMSprop generates reasonably good results only if 1 > β2 ≥ β∗ 2 . We provide empirical evidence for such a phase transition in our numerical experiments. 1 INTRODUCTION RMSprop (Tieleman & Hinton, 2012) remains one of the most popular algorithms for machine learning applications. As a non-momentum version of a more general algorithm Adam, RMSprop’s good empirical performance has been well acknowledged by practitioners in generative adversarial networks (GANs) (Seward et al., 2018; Yazıcı et al., 2019; Karnewar & Wang, 2020; JolicoeurMartineau, 2019), reinforcement learning (Mnih et al., 2016), etc. In spite of its prevalence, however, Reddi et al. (2018) discovered that RMSprop (as well as the more general version Adam) can diverge even for simple convex functions. To fix the algorithm, the authors of Reddi et al. (2018) proposed a new variant called AMSGrad, which is guaranteed to converge under certain conditions. Since then, it has been an active area of research to design provably convergent variants of RMSprop. These variants include AdaFom (Chen et al., 2019), Adabound (Luo et al., 2019), Nostalgic Adam (Huang et al., 2019), Yogi (Zaheer et al., 2018), and many more. Despite the variants, the vanilla RMSprop indeed works well in practice, and after proper hyper-parameter tuning, the non-convergence issue has not been commonly observed. Why is there a large gap between theory and practice? Is this because the real-world problems are likely to be “nice”, or is it because the theoretical analysis of RMSprop does not match how it is used in practice? With the above questions in mind, we revisited the counter-example of Reddi et al. (2018), and found an interesting phenomenon. One counter-example of Reddi et al. (2018) is the following: ft(x) = { Cx, for t mod C = 1 −x, otherwise (1) where x ∈ [−1, 1]. They proved the divergence under the condition β2 ≤ min{C− 4 C−2 , 1− ( 9 2C )2}, where β2 is the second order momentum coefficient in Algorithm 1 (the algorithm is presented later). For instance, when C = 10, then the algorithm diverges if β2 < 0.3. Reddi et al. (2018) mentioned ∗IOE, University of Michigan, [email protected]. Part of the work was done when Naichen Shi was working with Prof. Ruoyu Sun as an intern. †ISE, University of Illinois at Urbana-Champaign. [email protected] ‡ECE, University of Minnesota - Twin Cities, [email protected]. §University of Illinois at Urbana-Champaign. [email protected]. Corresponding author: Ruoyu Sun. 
that “this explains why large β2 is advisable while using Adam algorithm”, but they did not analyze whether large β2 leads to convergence in their example. We ran simulation for problem (1) with different β2 and found there is always a threshold of β2 above which RMSprop converges, see Figure 1. For instance, when C = 10, the transition point of β2 is roughly 0.955: the algorithm converges if β2 > 0.956 but diverges if β2 < 0.955. In general, there is a curve of phase transition from divergence to convergence, and such a curve slopes upward, which means the transition point is closer to 1 if C becomes larger. Based on this observation, we make the following conjecture: Conjecture: RMSprop converges if β2 is large enough. Before further discussion, we introduce the following assumption. Assumption 1.1. f(x) = ∑n−1 j=0 fj(x), and n−1∑ j=0 ‖∇fj (x)‖22 ≤ D1 ‖∇f (x)‖ 2 2 +D0. (2) We divide optimization problems into 2 classes: realizable problems where D0 = 0 and non-realizable problems where D0 > 0. When D0 = 0, the assumption (1.1) becomes∑n−1 j=0 ‖∇fj(x)‖ 2 2 ≤ D1 ‖∇f(x)‖ 2 2 , which is called “strong growth condition” (SGC) (Vaswani et al., 2019). It requires the norm of the stochastic gradient to be proportional to the batch gradient norm. When ‖∇f(x)‖ = 0, under SGC we have ‖∇fj(x)‖ = 0 for all j. For linear regression problems, SGC holds if the linear model can fit all data. More specifically, for the problem minx ‖Ax‖2 = ∑n j=1 ( aTj x )2 where A is an n by n matrix and aTj is the j-th row vector of A, SGC holds with D1 ≤ λmax (∑n i=1 aia T i aia T i ) /λmin ( ATA ) (Raj & Bach, 2020). SGC can be viewed as a simple condition that models overparameterized neural networks capable of interpolating all data points (Vaswani et al., 2019). Therefore, in this work we use the terminology “realizable problems” to refer to the problems that satisfy SGC. 1.1 MAIN CONTRIBUTIONS In an attempt to resolve the conjecture, we delve into RMSprop’s convergence issues and obtain a series of theoretical and empirical results. Our contributions are summarized below: • We find that RMSprop’s convergence is contingent on the choice of β2. For general optimization problems, there are two types of hyper-parameters: problem-dependent hyperparameters such as step size in GD, and universal constants such as momentum coefficient in heavy ball method 1. Our result reveals that β2 is closer to the first type. • We prove that RMSprop converges to stationary point for realizable problems (interpolation regime), and to some bounded region for non-realizable problems. Combining with the divergence example of RMSprop, this indicates the existence of a phase transition from divergence to convergence dependent on β2. Note that when we say “convergence”, in a weak sense it means the sequence converges to a bounded region for non-realizable case; and in a strong sense it means the sequence converges to stationary points for realizable case. • To our best knowledge, we are the first to prove the convergence of RMSprop and some of Adam without any form of assumption about the boundedness of the gradient norm. This is important for showing the transition: with added assumptions on bounded gradients, the gradients cannot diverge, while the counter-example shows that the gradient can. 2 PRELIMINARIES We consider a finite-sum problem: min x∈Rd f(x) = n−1∑ j=0 fj(x). (3) In neural network training, fj usually represents the loss contributed by the j-th sample batch. We present randomly shuffled Adam in Algorithm 1. 
RMSProp is the special case of Adam with β1 = 0. In this work, we mainly focus on RMSprop; nevertheless, we will present a result for a special case of Adam with small β1. Algorithm 1 Randomly Shuffled Adam Initialize m1,−1 = 11−β1∇f(x0) and v1,−1 = 1 1−β2 maxj{∇fj(x0) ◦ ∇fj(x0)}. for k = 1→∞ do Sample {τk,0, τk,1, · · · , τk,n−1} as a random permutation of {0, 1, 2, · · · , n− 1} for i = 0→ n− 1 do mk,i = β1mk,i−1 + (1− β1)∇fτk,i vk,i = β2vk,i−1 + (1− β2)∇fτk,i ◦ ∇fτk,i xk,i+1 = xk,i − ηk∗n√vk,i+ ◦ml,k,i end for Break if certain stopping criterion is satisfied. xk+1,0 = xk,n, vk+1,−1 = vk,n−1, mk+1,−1 = mk,n−1 end for return x In Algorithm 1, x denotes the optimization variable, m denotes the first order momentum and v denotes the second order momentum. Specifically, we denote xk,i,mk,i, vk,i ∈ Rd as the value of x,m, v at the k-th outer loop and i-th inner loop, respectively. We denote∇fj as the gradient of fj and let ◦ be the component-wise multiplication. The division of two vectors is component-wise as well. Moreover, we denote ηt as the step-size and β1, β2 as the hyper-parameters in the algorithm. When n = 1, we obtain full batch Adam. We replaced the bias correction step in (Kingma & Ba, 2015) with a special initialization on m1,−1 and v1,−1. This initialization can also correct the bias, but has cleaner results. Since the effect of initialization or bias correction becomes more and more negligible as the training progresses, RMSprop with zero initialization or our initialization will have the same asymptotic behavior. We put our results for the original version of RMSprop in the appendix. 1Rigorously speaking, for the best convergence rate, the momentum coefficient should also be problemdependent; but just for achieving convergence, it can be problem independent. As for hyper-parameters, we choose ηt = η1√t and fix β2 to be a constant that is independent of the iteration count. We allow to be an arbitrary non-negative constant; in particular, our result holds even for = 0. The constant is added in practice for numerical stability, and is typically chosen to be 10−6 or even 10−8. It is much smaller than√vk,i (which is roughly the size of gradient norm). 2.1 RELATED WORK As discussed earlier, one line of research focuses on variants of RMSprop and Adam that can be proved to converge. These works usually modify the update rule of vt. For instance, AMSGrad (Reddi et al., 2018), AdaFom (Chen et al., 2019) explicitly make vt non-decreasing. Nostalgic Adam (Huang et al., 2019) and the algorithms analyzed in Zou et al. (2019) and Chen et al. (2019) use iteration-dependent β2t (and/or β1t) to let vt weigh more on past gradients. Some works add new modifications into RMSprop and Adam; for instance, Zhou et al. (2019) mitigate the bias in update direction by using a different estimate of vt, Dozat (2016) combine Adam with Nesterov momentum, and Liu et al. (2020a) employ a warm-up technique. Besides modifying the algorithm, a few attempts have been made to address the non-convergence issues of the original versions, but they often rely on extra assumptions. A number of works (Zaheer et al., 2018; De et al., 2019; Défossez et al., 2020) prove the convergence of Adam under these additional assumptions. One representative work along this line, Défossez et al. (2020), establishes a clean convergence result and also provides some insights on the momentum mechanisms by improving the dependence of the iteration complexity on 1− β1. 
However, these works assume to be relatively large compared to √vk,i. The issue is that such a choice essentially transforms RMSprop back to SGD since the effective step size is primarily controlled by , in lieu of √vk,i. This is in contrary to the spirit of RMSprop, which is to use adaptive step size to accelerate convergence. A few other works do not need the assumption of , but they have other assumptions. De et al. (2018) analyze deterministic and stochastic RMSprop, but they utilize a rather unrealistic assumption that the sign of all noisy gradients are the same, i.e., sign(∇fp(x)) = sign(∇fq(x)) for all p, q. Chen et al. (2019) describe a few quantities based on the iterates, and prove that if they grow in a certain speed as the iterates go, the algorithm converges. The drawback is that the condition cannot be checked a priori. Besides the assumptions mentioned above, all the aforementioned works require the gradient to be bounded. In general, removing boundedness assumptions (of any kind, including bounded gradient, bounded iterates, etc.) is not necessarily easy. Thus, such results are appreciated even for basic SGD. For instance, Bertsekas & Tsitsiklis (2000) presents a nice discussion of various results on inexact GD without involving conventional bounded assumptions, and claims “bounded-assumption-free” as one of the main contributions of their work. Very recently, we notice another work (Liu et al., 2020b) which removes the bounded gradient assumption for SGDM (SGD with momentum) and obtains satisfactory rates. Nevertheless, we are not aware of an existing result on RMSprop that does not require bounded gradient assumption. We will explain later why removing this bounded gradient assumption is particularly important for our paper. 3 THE raison d’être FOR β2 Figure 1 clearly demonstrates the important role of β2 in the convergence of RMSprop. Specifically, a sufficiently large β2 is critical for RMSprop’s convergence. Indeed, some recent works (Reddi et al., 2018; Zhou et al., 2019) have also made similar arguments, but they focus on understanding one part of the phenomenon, that is, small β2 leads to divergence. Our goal in this work is to complete the other part of the story by showing that, sufficiently large β2 guarantees convergence. The formal result will be provided in Sec. 4. To understand the function of β2, we first discuss why RMSprop diverges. It is known that the stochastic noise due to mini-batch will distort the gradient direction, leading to possible divergence, but in standard SGD, the distortion in multiple iterations is eliminated since the stochastic gradient is an unbiased estimate of the gradient. For RMSprop, at a given iteration the scaling constant 1/ √ v in the update direction may cause larger gradient distortion than the standard SGD. The distortion can be so significant that the average updating direction falls outside the dual cone of the true gradient. To illustrate this, consider the extreme case that β2 = 0 and = 0 (i.e., signSGD) and the special example (1). When applying signSGD to solve (1), in each epoch which consists of C iterations, one iteration will move x left followed by C − 1 iterations that move x right. Since all step sizes are the same in one epoch, the accumulated effect of one epoch makes x move in the ascending direction, instead of the descending direction. Then why does large β2 help? Intuitively, a large β2 can control the distortion on update directions. 
In the extreme case that β2 = 1 and = 0, RMSprop reduces to SGD where the distortion of multiple iterations can be mitigated, leading to convergence. We suspect that β2 does not need to be exactly 1, and a large β2 is enough to control the distortion. Our experiment in Figure 1 confirms that, at least for the counter-example of Reddi et al. (2018), there is an interval β2 ∈ [c, 1] such that RMSprop converges. What was initially not clear is whether the counter-example of Reddi et al. (2018) is a very special case or the convergence of large-β2-RMSprop holds for all problems. We found the real situation is somewhat more tricky. For non-realizable problems, we discovered an example for which RMSprop cannot converge to the minimum for a wide range of β2 < 1, but unlike the small-β2-case the iterates converge to a small ball around the minimum. This motivates us to distinguish three convergentsituations: divergence, convergence to a small region, convergence to critical points. What we can prove for the general problem is (see Theorem 4.3): for small β2, RMSprop can diverge; for large β2, RMSprop must converge to a small region whose size depends on β2. Then why do we observe the convergence to a single point in the experiment for (1)? We suspect this is because the problem (1) is realizable, and conjecture that the property of “convergence to critical points” holds for all realizable problems. We indeed prove this conjecture (see Corollary 4.1): large-β2-RMSprop converges to critical points if the problem satisfies SGC. We summarize our findings about the convergence properties of RMSprop in Table 1. Note that our results do not conflict with Theorem 3 in Reddi et al. (2018) which claims that “for any constant β1 and β2 there exists a divergent example” since here we choose β2 to be problemdependent, just like one chooses a step size < 2/L for GD where L is a problem dependent parameter. Another remark is that though β2 could be close to 1, RMSprop still retains the ability to adapt v to gradient square norm as long as β2 < 1, because new gradient signals are added for each iteration and the impact of previous signals decays exponentially. It is the adaptive ability that distinguishes RMSprop from SGD. Proving the theoretical advantage of RMSprop over SGD (i.e., choosing β2 < 1 is better than β2 = 1) is a very intriguing question; in general, the theoretical advantage of adaptive gradient methods (including RMSprop and AdaGrad) over SGD is a long standing question. in this work, we focus on the fundamental problem of convergence, instead of the more challenging question of justifying the advantage of RMSprop. 4 CONVERGENCE RESULTS In this section, we present the formal theoretical results. We start from the results of full batch RMSprop/Adam, and then present the results for the stochastic versions. Note that random shuffling is not a key factor, and the proof works for other settings. 4.1 FULL-BATCH VERSION We first consider the full-batch version of RMSprop. The following theorem shows that if we use all samples to evaluate the gradient, RMSProp with diminishing stepsize converges to critical points regardless of the choice of β2. Here we consider one popular step size schedule that ηt = η1√t . Theorem 4.1. (convergence of full-batch RMSprop) For problem (3) with n = 1, assume that f is gradient Lipschitz continuous with constant L and lower bounded by f∗. Then, for full-batch RMSprop (Alg. 
(Alg. 1 with β1 = 0, ε = 0) with diminishing step size ηt = η1/√t and any β2 ∈ (0, 1), we have

min_{t∈(1,T]} ‖∇f_t‖1 ≤ O(log T / √T),

where T > 0 is the total iteration number.

De et al. (2019) also prove the convergence of full-batch RMSprop, but they require the gradient norm to be upper bounded; in contrast, we do not need this assumption, and only require lower-boundedness and L-smoothness of f. Our result suggests that the convergence property of full-batch RMSprop is similar to that of signSGD, an algorithm that only uses the sign of the gradient to calculate its descent direction (Bernstein et al., 2018): in the full-batch setting, signSGD (which can be called signGD) has also been proved to converge without a bounded gradient assumption. Below, we also derive an analogous result for full-batch Adam with only one additional constraint, β1 < √β2 < 1, which is often satisfied in practice:

Theorem 4.2. (convergence of full-batch Adam) For optimization problem (3) with n = 1, assume that f is gradient Lipschitz continuous with constant L and lower bounded by f∗. Then, for full-batch Adam with diminishing step size ηt = η1/√t and any β1 < √β2 < 1, we have min_{t∈(1,T]} ‖∇f_t‖1 ≤ O(log T / √T).

4.2 STOCHASTIC VERSIONS

As mentioned earlier, our simulation shows that RMSprop may not converge to critical points for non-realizable problems (an example is provided in the appendix). Nevertheless, we can still show that randomly shuffled large-β2-RMSprop converges to a bounded region:

Theorem 4.3. (large-β2 RMSprop converges to a region) For problem (3), assume f is lower-bounded by f∗ and every ∇fj is L-Lipschitz continuous. Furthermore, assume (2) holds and β2 satisfies

T2(β2) ≜ √(10dn/β2^n) · dnD1(1 − β2)(4n²/β2^n − 1)² + (1/√(β2^n) − 1) ≤ (√2 − 1)/(2√2).   (4)

Then, for randomly shuffled RMSprop with ηt = η1/√t, we have

min_{t∈(1,T]} min{ ‖∇f_{nt}‖1 , ‖∇f_{nt}‖2² √(D1 d / D0) } ≤ O(log T / √T) + O(Q_{3,3} √D0),   ∀ T ≥ 4.

Here Q_{3,3} > 0 is a β2-dependent constant that goes to zero in the limit β2 → 1.

Remark 1. This result and the result in Reddi et al. (2018) together distinguish large-β2-RMSprop from small-β2-RMSprop: the former converges to a bounded region, while the latter can diverge. Note that there is a gap between the lower bound of β2 here and the upper bound of β2 in the counter-example. We do not try to provide tight bounds on the threshold of β2, as our main goal is to show a qualitative difference between large-β2-RMSprop and small-β2-RMSprop.

Remark 2. Condition (4) in Theorem 4.3 implies that 1 − β2 ≤ O(n^{-3.5}). In the appendix we introduce three problem-dependent parameters ρ1 ∈ [1, √n], ρ2 ∈ [0, n], and ρ3 ∈ [1, √n] in equations (14), (15), and (16), and improve the sufficient condition (4) to 1 − β2 ≥ O(1/(nρ1ρ2ρ3)). For the worst case, the bound is O(n^{-3.5}), just like condition (4) in Theorem 4.3. In the actual training process, ρ1, ρ2, and ρ3 may not reach their upper bounds, so the threshold of β2 can be lower in practice (see Appendix A.5 for some empirical estimates of the ρi's). The dependence on the number of batches n suggests that as n increases, the required hyper-parameter β2 should be larger. This is understandable, since more mini-batches means larger noise in the stochastic gradient, and thus a larger β2 is required. There is a gap between our theoretical bound on β2 and the empirical transition point, and it is an interesting future question to close this gap.

Remark 3.
We point out three possible algorithm behaviors: divergence to infinity (or divergence for short), convergence to a bounded region (or non-divergence for short), and convergence to critical points. We distinguish these three cases to make it easier to explain the qualitative difference between the small-β2 and large-β2 regimes. For non-realizable cases, the phase transition is from divergence to non-divergence. Therefore, it is important to discard the bounded-gradient assumption: this assumption eliminates the possibility of divergence of the gradients a priori. To be clear, there are actually two sub-cases of non-divergence: the iterates can stay in a bounded but huge region (bad case), or the iterates stay in a bounded region whose size depends on some parameters (good case). Indeed, the "convergence" of constant-stepsize SGD is in the sense of "converging to a region with size proportional to the noise variance". Our result of "converging to a bounded region" is also meaningful, as the size of the region goes to zero as the noise variance goes to 0 or D0 goes to 0 (the realizable case). Note that "divergence" can also be interpreted as "not converging to critical points", which is the notion used in Reddi et al. (2018), instead of "diverging to infinity". We use the latter concept of "diverging to infinity" for the term "divergence", because "not converging to critical points" can include the good case of converging to a small region around critical points (like constant-stepsize SGD). In the example of Reddi et al. (2018), a constrained problem is considered (bound constraint [−1, 1]), so divergence to infinity cannot happen. We add an example where the iterates and the gradients can diverge to infinity for small β2; see Appendix A.2.

As a corollary of Theorem 4.3, if the optimization problem satisfies SGC (i.e., D0 = 0), RMSprop converges to critical points.

Corollary 4.1. Suppose the assumptions of Theorem 4.3 hold. Further, assume (2) holds with D0 = 0, i.e., ∑_{j=0}^{n−1} ‖∇fj(x)‖2² ≤ D1 ‖∇f(x)‖2² for all x. Then we have min_{t∈(1,T]} ‖∇f_{nt}‖1 ≤ O(log T / √T), ∀ T ≥ 4.

With the above corollary, the numerical result in Figure 1 should not be surprising: problem (1) satisfies the strong growth condition, and thus there is always a range of β2 inside which RMSprop converges; we just need to tune β2 to be larger. We can prove a similar convergence result for Adam with small β1 and large β2.

Theorem 4.4. For optimization problem (3), assume that f is lower-bounded by f∗ and that fj is gradient Lipschitz continuous with constant L for all j. Furthermore, assume that fj satisfies (2) for all x. Then, for randomly shuffled Adam with diminishing step size ηt = η1/√t and β1, β2 satisfying T1(β1, β2) + T2(β2) < 1 − 1/√2, we have min_{t∈[1,T]} ‖∇f_{nt}‖1 ≤ O(log T / √T) + O(Q_{3,5} √D0), ∀ T ≥ 4, where Q_{3,5} is a constant that approaches 0 in the limit T1 + T2 → 0, T2 is defined in (4), and T1 is defined as T1(β1, β2) = √(5dn/β2^n) · dn²D1 · (β1/β2^n) · ((1 − β1)/(1 − β1^n) + 1).

Remark: This result shows that controlling β2 and β1 together can ensure the convergence of Adam. We conjecture that the same convergence can be proved for a large range of β1, but we are not able to prove that for now (which is why we focus on RMSprop in this work) and leave it to future work.

5 EXPERIMENTS

We conduct image classification and GAN experiments on MNIST and CIFAR-10 to support our theoretical findings. The details of the GAN experiments are in the appendix; in this section we focus on the image classification results.
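Before turning to the results, here is a hedged sketch of the kind of β2 sweep behind Table 2. The model choice matches the text (ResNet-18, 100 epochs, no explicit regularization), but the learning rate, data pipeline, and the use of torch.optim.Adam with β1 = 0 as a stand-in for randomly shuffled RMSprop are our own assumptions; the actual training code is in the repository linked below.

```python
import torch
import torchvision
import torchvision.transforms as T

def train_with_beta2(beta2, batch_size, epochs=100, lr=1e-3):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torchvision.models.resnet18(num_classes=10).to(device)
    # beta_2 is the second entry of `betas`; setting beta_1 = 0 gives the
    # RMSprop-style special case studied in the theorems above.
    opt = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.0, beta2))
    loss_fn = torch.nn.CrossEntropyLoss()
    data = torchvision.datasets.CIFAR10(".", train=True, download=True, transform=T.ToTensor())
    loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:              # reshuffled every epoch
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
    return model

# the grid of Table 2: batch sizes 8/16/32 and beta_2 in {0.8, 0.9, 0.95, 0.99}
for bs in (8, 16, 32):
    for b2 in (0.8, 0.9, 0.95, 0.99):
        train_with_beta2(b2, bs)
```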
We visualize the optimization trajectory when training on MNIST for small β2 = 0.8 and large β2 = 0.99 in Figure 2. We observe different behaviors: while the trajectory for β2 = 0.8 moves away from the bottom of the basin, for the larger β2 the trajectory stays in the level set and has decreasing loss values. In the CIFAR experiments, we use ResNet-18. We choose β2 = 0.8, 0.9, 0.95, and 0.99, respectively. With different batch sizes 8, 16, and 32, we run each algorithm for 100 epochs without explicit regularization. Table 2 shows two phenomena: first, for a fixed batch size, there is a transition point of β2 above which the accuracy suddenly jumps; second, the transition point is closer to 1 as the batch size decreases. More specifically, for batch size 8, the transition point lies in [0.95, 0.99]: the average training accuracy is 44.53% for β2 = 0.95, but jumps to 99.74% for β2 = 0.99. For batch size 16, the transition point lies in [0.9, 0.95]: the average training accuracy is 67.27% for β2 = 0.9, but jumps to 96.38% for β2 = 0.95. For batch size 32, the transition point lies in [0.8, 0.9]. As the batch size increases from 8 to 16 and then 32, the transition point decreases from 0.99 to 0.95 and then to 0.9. These two phenomena are consistent with our theory. The first phenomenon can be explained by Theorem 4.3 and Corollary 4.1, which state that for large enough β2, RMSprop converges. The second phenomenon can be explained by Theorem 4.3 as well: as explained in Remark 2, the required β2 decreases as the number of mini-batches n decreases, i.e., as the batch size increases. Next, we demonstrate that the convergence speed of SGD is much slower than that of Adam under the same experimental setting as Table 2. We compare the average training and test accuracy at the 10-th epoch. As Table 3 shows, the accuracy of Adam is much higher than that of SGD at the 10-th epoch. All code generating the experimental results is available in the GitHub repository https://github.com/soundsinteresting/RMSprop

6 CONCLUSION

In this work, we study the convergence behavior of RMSprop by taking a closer look at its hyper-parameters. Specifically, for realizable problems, we provide a data-dependent threshold of β2 above which we prove the convergence of randomly shuffled RMSprop and small-β1 Adam without a bounded gradient assumption. We also show that RMSprop converges to a bounded region under non-realizable settings. These findings reveal that there is a critical threshold of β2 regarding the convergence behavior of RMSprop, and the phase transition is supported by the numerical experiments. Our results provide basic guidelines for tuning hyper-parameters in practice.

7 ACKNOWLEDGEMENT

M. Hong is supported by NSF grant CMMI-1727757. Ruichen Li from Peking University helped check part of the proof of Theorem 4.3. We thank all anonymous reviewers for their feedback. We also want to thank Eduard Gorbunov and Juntang Zhuang for pointing out some mistakes in earlier versions on OpenReview.
1. What is the focus of the paper regarding Adam family algorithms? 2. What are the strengths of the paper, particularly in terms of analysis and relevance? 3. Are there any areas where the paper could improve, such as comparing convergence regimes or providing more explanation for a specific condition? 4. How do the results contribute to the broader understanding of Adam family algorithms and their performance on modern machine learning tasks? 5. In what ways do the results provide novel insights and remove problematic assumptions from related work?
Review
Review The paper starts off from the recent realization that there exist divergent examples for any set of hyperparameters for algorithms in the Adam family, such as RMSprop. It sets out to study the effect of the beta2 parameter on convergence for a fixed specific problem. The analysis shows that there exists a beta2 < 1 that leads to convergence for realizable problems, and to convergence to a bounded region of interest for non-realizable problems, without requiring a bounded-gradient assumption. Experiments confirm this new theory. Overall, the paper is well-written, clear and easy to read. One of its strongest points is how well the analysis and the relevance of the results are motivated. For instance, the argument for removing the assumption of bounded gradients — because that assumption effectively removes one of the convergence/divergence regimes — is well executed. There are also significant efforts to provide clear, simplified examples of rather complex theorems, which is very appreciated (e.g. Corollary 4.1). Further, there is a real effort to contrast the results with previous work, and to explain how this work complements it, clearly resolving what initially appears as direct contradictions. The results are relevant, both from the point of view of the theory, where they add to a body of work explaining how and why the Adam family of algorithms performs well on modern machine learning workloads, and from the point of view of the practitioner, outlining what hyperparameter tuning is necessary to achieve convergence. They are also original, in the sense that they provide novel insights while removing problematic assumptions that permeate most of the related work.
A couple of things could be improved: as pointed out in the paper, if beta2 = 1, the algorithm degenerates to SGD. While there is a remark explaining why the two algorithms differ as long as beta2 < 1, it would be informative to compare the convergence regimes with high beta2 to SGD directly, to validate that there exists a set of hyperparameters that not only provides convergence but also improved convergence properties compared to SGD (otherwise the results are a lot less relevant), as well as to give an order of magnitude of what value is typically necessary for beta2.
Condition (4) in Theorem 4.3 is quite difficult to apprehend, with a slightly worrying beta2^n term. More exegesis would be beneficial for reader comprehension.
Overall, this is a nice, well-written and relevant paper that clears the bar for publication in its current version.
ICLR
Title RMSprop converges with proper hyper-parameter

Abstract Despite the existence of divergence examples, RMSprop remains one of the most popular algorithms in machine learning. Towards closing the gap between theory and practice, we prove that RMSprop converges with proper choice of hyper-parameters under certain conditions. More specifically, we prove that when the hyper-parameter β2 is close enough to 1, RMSprop and its random shuffling version converge to a bounded region in general, and to critical points in the interpolation regime. It is worth mentioning that our results do not depend on the "bounded gradient" assumption, which is often the key assumption utilized by existing theoretical work for Adam-type adaptive gradient methods. Removing this assumption allows us to establish a phase transition from divergence to non-divergence for RMSprop. Finally, based on our theory, we conjecture that in practice there is a critical threshold β2*, such that RMSprop generates reasonably good results only if 1 > β2 ≥ β2*. We provide empirical evidence for such a phase transition in our numerical experiments.

1 INTRODUCTION

RMSprop (Tieleman & Hinton, 2012) remains one of the most popular algorithms for machine learning applications. As a non-momentum version of the more general algorithm Adam, RMSprop's good empirical performance has been well acknowledged by practitioners in generative adversarial networks (GANs) (Seward et al., 2018; Yazıcı et al., 2019; Karnewar & Wang, 2020; Jolicoeur-Martineau, 2019), reinforcement learning (Mnih et al., 2016), etc. In spite of its prevalence, however, Reddi et al. (2018) discovered that RMSprop (as well as the more general version, Adam) can diverge even for simple convex functions. To fix the algorithm, the authors of Reddi et al. (2018) proposed a new variant called AMSGrad, which is guaranteed to converge under certain conditions. Since then, it has been an active area of research to design provably convergent variants of RMSprop. These variants include AdaFom (Chen et al., 2019), Adabound (Luo et al., 2019), Nostalgic Adam (Huang et al., 2019), Yogi (Zaheer et al., 2018), and many more. Despite the variants, the vanilla RMSprop indeed works well in practice, and after proper hyper-parameter tuning, the non-convergence issue has not been commonly observed. Why is there a large gap between theory and practice? Is this because real-world problems are likely to be "nice", or is it because the theoretical analysis of RMSprop does not match how it is used in practice? With the above questions in mind, we revisited the counter-example of Reddi et al. (2018), and found an interesting phenomenon. One counter-example of Reddi et al. (2018) is the following:

ft(x) = Cx for t mod C = 1, and ft(x) = −x otherwise,   (1)

where x ∈ [−1, 1]. They proved divergence under the condition β2 ≤ min{C^(−4/(C−2)), 1 − (9/(2C))²}, where β2 is the second-order momentum coefficient in Algorithm 1 (the algorithm is presented later). For instance, when C = 10, the algorithm diverges if β2 < 0.3.

∗IOE, University of Michigan, [email protected]. Part of the work was done when Naichen Shi was working with Prof. Ruoyu Sun as an intern. †ISE, University of Illinois at Urbana-Champaign, [email protected]. ‡ECE, University of Minnesota - Twin Cities, [email protected]. §University of Illinois at Urbana-Champaign, [email protected]. Corresponding author: Ruoyu Sun.

Reddi et al. (2018) mentioned
that “this explains why large β2 is advisable while using Adam algorithm”, but they did not analyze whether large β2 leads to convergence in their example. We ran simulations for problem (1) with different β2 and found that there is always a threshold of β2 above which RMSprop converges; see Figure 1. For instance, when C = 10, the transition point of β2 is roughly 0.955: the algorithm converges if β2 > 0.956 but diverges if β2 < 0.955. In general, there is a curve of phase transition from divergence to convergence, and this curve slopes upward, which means the transition point is closer to 1 as C becomes larger. Based on this observation, we make the following conjecture:

Conjecture: RMSprop converges if β2 is large enough.

Before further discussion, we introduce the following assumption.

Assumption 1.1. f(x) = ∑_{j=0}^{n−1} fj(x), and

∑_{j=0}^{n−1} ‖∇fj(x)‖2² ≤ D1 ‖∇f(x)‖2² + D0.   (2)

We divide optimization problems into two classes: realizable problems, where D0 = 0, and non-realizable problems, where D0 > 0. When D0 = 0, Assumption 1.1 becomes ∑_{j=0}^{n−1} ‖∇fj(x)‖2² ≤ D1 ‖∇f(x)‖2², which is called the "strong growth condition" (SGC) (Vaswani et al., 2019). It requires the norm of the stochastic gradient to be proportional to the batch gradient norm. When ‖∇f(x)‖ = 0, under SGC we have ‖∇fj(x)‖ = 0 for all j. For linear regression problems, SGC holds if the linear model can fit all data. More specifically, for the problem min_x ‖Ax‖² = ∑_{j=1}^{n} (aj^T x)², where A is an n-by-n matrix and aj^T is the j-th row vector of A, SGC holds with D1 ≤ λmax(∑_{i=1}^{n} ai ai^T ai ai^T) / λmin(A^T A) (Raj & Bach, 2020). SGC can be viewed as a simple condition that models overparameterized neural networks capable of interpolating all data points (Vaswani et al., 2019). Therefore, in this work we use the terminology "realizable problems" to refer to problems that satisfy SGC.

1.1 MAIN CONTRIBUTIONS

In an attempt to resolve the conjecture, we delve into RMSprop's convergence issues and obtain a series of theoretical and empirical results. Our contributions are summarized below:

• We find that RMSprop's convergence is contingent on the choice of β2. For general optimization problems, there are two types of hyper-parameters: problem-dependent hyper-parameters, such as the step size in GD, and universal constants, such as the momentum coefficient in the heavy-ball method.¹ Our result reveals that β2 is closer to the first type.

• We prove that RMSprop converges to stationary points for realizable problems (the interpolation regime), and to some bounded region for non-realizable problems. Combined with the divergence example of RMSprop, this indicates the existence of a phase transition from divergence to convergence that depends on β2. Note that when we say "convergence", in a weak sense it means the sequence converges to a bounded region (the non-realizable case); in a strong sense it means the sequence converges to stationary points (the realizable case).

• To our best knowledge, we are the first to prove the convergence of RMSprop, and of some instances of Adam, without any form of assumption about the boundedness of the gradient norm. This is important for showing the transition: with added assumptions on bounded gradients, the gradients cannot diverge, while the counter-example shows that they can.

2 PRELIMINARIES

We consider a finite-sum problem:

min_{x∈R^d} f(x) = ∑_{j=0}^{n−1} fj(x).   (3)

In neural network training, fj usually represents the loss contributed by the j-th sample batch. We present randomly shuffled Adam in Algorithm 1.
RMSprop is the special case of Adam with β1 = 0. In this work, we mainly focus on RMSprop; nevertheless, we will present a result for a special case of Adam with small β1.

Algorithm 1 Randomly Shuffled Adam
  Initialize m_{1,−1} = (1/(1−β1)) ∇f(x0) and v_{1,−1} = (1/(1−β2)) max_j {∇fj(x0) ◦ ∇fj(x0)}.
  for k = 1 → ∞ do
    Sample {τ_{k,0}, τ_{k,1}, · · · , τ_{k,n−1}} as a random permutation of {0, 1, 2, · · · , n−1}
    for i = 0 → n−1 do
      m_{k,i} = β1 m_{k,i−1} + (1−β1) ∇f_{τ_{k,i}}
      v_{k,i} = β2 v_{k,i−1} + (1−β2) ∇f_{τ_{k,i}} ◦ ∇f_{τ_{k,i}}
      x_{k,i+1} = x_{k,i} − η_{k·n} / (√v_{k,i} + ε) ◦ m_{k,i}
    end for
    Break if a certain stopping criterion is satisfied.
    x_{k+1,0} = x_{k,n}, v_{k+1,−1} = v_{k,n−1}, m_{k+1,−1} = m_{k,n−1}
  end for
  return x

In Algorithm 1, x denotes the optimization variable, m denotes the first-order momentum, and v denotes the second-order momentum. Specifically, we denote x_{k,i}, m_{k,i}, v_{k,i} ∈ R^d as the values of x, m, v at the k-th outer loop and i-th inner loop, respectively. We denote ∇fj as the gradient of fj and let ◦ be the component-wise multiplication. The division of two vectors is component-wise as well. Moreover, we denote ηt as the step size and β1, β2 as the hyper-parameters of the algorithm. When n = 1, we obtain full-batch Adam. We replaced the bias correction step in (Kingma & Ba, 2015) with a special initialization of m_{1,−1} and v_{1,−1}. This initialization can also correct the bias, but leads to cleaner results. Since the effect of initialization or bias correction becomes more and more negligible as training progresses, RMSprop with zero initialization or with our initialization will have the same asymptotic behavior. We put our results for the original version of RMSprop in the appendix.

¹Rigorously speaking, for the best convergence rate, the momentum coefficient should also be problem-dependent; but just for achieving convergence, it can be problem-independent.

As for hyper-parameters, we choose ηt = η1/√t and fix β2 to be a constant that is independent of the iteration count. We allow ε to be an arbitrary non-negative constant; in particular, our result holds even for ε = 0. The constant ε is added in practice for numerical stability, and is typically chosen to be 10−6 or even 10−8. It is much smaller than √v_{k,i} (which is roughly the size of the gradient norm).

2.1 RELATED WORK

As discussed earlier, one line of research focuses on variants of RMSprop and Adam that can be proved to converge. These works usually modify the update rule of vt. For instance, AMSGrad (Reddi et al., 2018) and AdaFom (Chen et al., 2019) explicitly make vt non-decreasing. Nostalgic Adam (Huang et al., 2019) and the algorithms analyzed in Zou et al. (2019) and Chen et al. (2019) use iteration-dependent β2t (and/or β1t) to let vt weigh past gradients more. Some works add new modifications to RMSprop and Adam; for instance, Zhou et al. (2019) mitigate the bias in the update direction by using a different estimate of vt, Dozat (2016) combines Adam with Nesterov momentum, and Liu et al. (2020a) employ a warm-up technique. Besides modifying the algorithm, a few attempts have been made to address the non-convergence issues of the original versions, but they often rely on extra assumptions. A number of works (Zaheer et al., 2018; De et al., 2019; Défossez et al., 2020) prove the convergence of Adam under these additional assumptions. One representative work along this line, Défossez et al. (2020), establishes a clean convergence result and also provides some insights on the momentum mechanisms by improving the dependence of the iteration complexity on 1 − β1.
However, these works assume ε to be relatively large compared to √v_{k,i}. The issue is that such a choice essentially transforms RMSprop back to SGD, since the effective step size is primarily controlled by ε rather than by √v_{k,i}. This is contrary to the spirit of RMSprop, which is to use an adaptive step size to accelerate convergence. A few other works do not need this assumption on ε, but they rely on other assumptions. De et al. (2018) analyze deterministic and stochastic RMSprop, but they utilize a rather unrealistic assumption that the signs of all noisy gradients are the same, i.e., sign(∇fp(x)) = sign(∇fq(x)) for all p, q. Chen et al. (2019) describe a few quantities based on the iterates, and prove that if these quantities grow at a certain speed as the iterations proceed, the algorithm converges. The drawback is that this condition cannot be checked a priori. Besides the assumptions mentioned above, all the aforementioned works require the gradient to be bounded. In general, removing boundedness assumptions (of any kind, including bounded gradients, bounded iterates, etc.) is not necessarily easy. Thus, such results are appreciated even for basic SGD. For instance, Bertsekas & Tsitsiklis (2000) present a nice discussion of various results on inexact GD without involving conventional boundedness assumptions, and claim "bounded-assumption-free" as one of the main contributions of their work. Very recently, we noticed another work (Liu et al., 2020b) which removes the bounded gradient assumption for SGDM (SGD with momentum) and obtains satisfactory rates. Nevertheless, we are not aware of an existing result on RMSprop that does not require the bounded gradient assumption. We will explain later why removing this bounded gradient assumption is particularly important for our paper.

3 THE raison d'être FOR β2

Figure 1 clearly demonstrates the important role of β2 in the convergence of RMSprop. Specifically, a sufficiently large β2 is critical for RMSprop's convergence. Indeed, some recent works (Reddi et al., 2018; Zhou et al., 2019) have also made similar arguments, but they focus on understanding one part of the phenomenon, namely that small β2 leads to divergence. Our goal in this work is to complete the other part of the story by showing that a sufficiently large β2 guarantees convergence. The formal result will be provided in Sec. 4. To understand the function of β2, we first discuss why RMSprop diverges. It is known that the stochastic noise due to mini-batching distorts the gradient direction, leading to possible divergence, but in standard SGD the distortion across multiple iterations is eliminated since the stochastic gradient is an unbiased estimate of the gradient. For RMSprop, at a given iteration the scaling factor 1/√v in the update direction may cause larger gradient distortion than in standard SGD. The distortion can be so significant that the average updating direction falls outside the dual cone of the true gradient. To illustrate this, consider the extreme case that β2 = 0 and ε = 0 (i.e., signSGD) and the special example (1). When applying signSGD to solve (1), in each epoch, which consists of C iterations, one iteration moves x left, followed by C − 1 iterations that move x right. Since all step sizes are the same within one epoch, the accumulated effect of one epoch makes x move in the ascending direction, instead of the descending direction. Then why does large β2 help? Intuitively, a large β2 can control the distortion of the update directions.
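To keep the quantities in this discussion concrete, here is a minimal numpy sketch of one epoch of the update analyzed above, i.e. Algorithm 1 with β1 = 0 (randomly shuffled RMSprop). The per-batch gradient callables and the fixed within-epoch step size are assumptions of the sketch, not the authors' code.

```python
import numpy as np

def rmsprop_epoch(x, v, grads, eta, beta2, eps=0.0, rng=None):
    """One outer iteration of Algorithm 1 with beta_1 = 0: a randomly shuffled pass
    over the n mini-batch gradients. `grads` is a list of callables standing in for
    the per-batch gradients grad f_j."""
    rng = rng or np.random.default_rng()
    for j in rng.permutation(len(grads)):        # tau_{k,0}, ..., tau_{k,n-1}
        g = grads[j](x)                          # stochastic gradient of batch j
        v = beta2 * v + (1.0 - beta2) * g * g    # second-moment estimate v_{k,i}
        x = x - eta * g / (np.sqrt(v) + eps)     # coordinate-wise rescaled step
    return x, v
```

With β2 close to 1, v changes little within an epoch, so the per-coordinate rescaling is nearly the same for every mini-batch and the distortion discussed above is limited.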
In the extreme case that β2 = 1 and ε = 0, RMSprop reduces to SGD, where the distortion across multiple iterations can be mitigated, leading to convergence. We suspect that β2 does not need to be exactly 1, and that a large β2 is enough to control the distortion. Our experiment in Figure 1 confirms that, at least for the counter-example of Reddi et al. (2018), there is an interval β2 ∈ [c, 1] on which RMSprop converges. What was initially not clear is whether the counter-example of Reddi et al. (2018) is a very special case, or whether the convergence of large-β2-RMSprop holds for all problems. We found the real situation to be somewhat more subtle. For non-realizable problems, we discovered an example for which RMSprop cannot converge to the minimum for a wide range of β2 < 1; however, unlike in the small-β2 case, the iterates converge to a small ball around the minimum. This motivates us to distinguish three situations: divergence, convergence to a small region, and convergence to critical points. What we can prove for the general problem is (see Theorem 4.3): for small β2, RMSprop can diverge; for large β2, RMSprop must converge to a small region whose size depends on β2. Then why do we observe convergence to a single point in the experiment for (1)? We suspect this is because problem (1) is realizable, and conjecture that the property of "convergence to critical points" holds for all realizable problems. We indeed prove this conjecture (see Corollary 4.1): large-β2-RMSprop converges to critical points if the problem satisfies SGC. We summarize our findings about the convergence properties of RMSprop in Table 1.

Note that our results do not conflict with Theorem 3 in Reddi et al. (2018), which claims that "for any constant β1 and β2 there exists a divergent example", since here we choose β2 to be problem-dependent, just like one chooses a step size < 2/L for GD, where L is a problem-dependent parameter. Another remark is that, although β2 may be close to 1, RMSprop still retains the ability to adapt v to the squared gradient norm as long as β2 < 1, because new gradient signals are added at each iteration and the impact of previous signals decays exponentially. It is this adaptive ability that distinguishes RMSprop from SGD. Proving the theoretical advantage of RMSprop over SGD (i.e., that choosing β2 < 1 is better than β2 = 1) is a very intriguing question; in general, the theoretical advantage of adaptive gradient methods (including RMSprop and AdaGrad) over SGD is a long-standing open question. In this work, we focus on the fundamental problem of convergence, instead of the more challenging question of justifying the advantage of RMSprop.

4 CONVERGENCE RESULTS

In this section, we present the formal theoretical results. We start from the results for full-batch RMSprop/Adam, and then present the results for the stochastic versions. Note that random shuffling is not a key factor, and the proof works for other settings.

4.1 FULL-BATCH VERSION

We first consider the full-batch version of RMSprop. The following theorem shows that if we use all samples to evaluate the gradient, RMSprop with diminishing step size converges to critical points regardless of the choice of β2. Here we consider one popular step size schedule, ηt = η1/√t.

Theorem 4.1. (convergence of full-batch RMSprop) For problem (3) with n = 1, assume that f is gradient Lipschitz continuous with constant L and lower bounded by f∗. Then, for full-batch RMSprop
(Alg. 1 with β1 = 0, ε = 0) with diminishing step size ηt = η1/√t and any β2 ∈ (0, 1), we have

min_{t∈(1,T]} ‖∇f_t‖1 ≤ O(log T / √T),

where T > 0 is the total iteration number.

De et al. (2019) also prove the convergence of full-batch RMSprop, but they require the gradient norm to be upper bounded; in contrast, we do not need this assumption, and only require lower-boundedness and L-smoothness of f. Our result suggests that the convergence property of full-batch RMSprop is similar to that of signSGD, an algorithm that only uses the sign of the gradient to calculate its descent direction (Bernstein et al., 2018): in the full-batch setting, signSGD (which can be called signGD) has also been proved to converge without a bounded gradient assumption. Below, we also derive an analogous result for full-batch Adam with only one additional constraint, β1 < √β2 < 1, which is often satisfied in practice:

Theorem 4.2. (convergence of full-batch Adam) For optimization problem (3) with n = 1, assume that f is gradient Lipschitz continuous with constant L and lower bounded by f∗. Then, for full-batch Adam with diminishing step size ηt = η1/√t and any β1 < √β2 < 1, we have min_{t∈(1,T]} ‖∇f_t‖1 ≤ O(log T / √T).

4.2 STOCHASTIC VERSIONS

As mentioned earlier, our simulation shows that RMSprop may not converge to critical points for non-realizable problems (an example is provided in the appendix). Nevertheless, we can still show that randomly shuffled large-β2-RMSprop converges to a bounded region:

Theorem 4.3. (large-β2 RMSprop converges to a region) For problem (3), assume f is lower-bounded by f∗ and every ∇fj is L-Lipschitz continuous. Furthermore, assume (2) holds and β2 satisfies

T2(β2) ≜ √(10dn/β2^n) · dnD1(1 − β2)(4n²/β2^n − 1)² + (1/√(β2^n) − 1) ≤ (√2 − 1)/(2√2).   (4)

Then, for randomly shuffled RMSprop with ηt = η1/√t, we have

min_{t∈(1,T]} min{ ‖∇f_{nt}‖1 , ‖∇f_{nt}‖2² √(D1 d / D0) } ≤ O(log T / √T) + O(Q_{3,3} √D0),   ∀ T ≥ 4.

Here Q_{3,3} > 0 is a β2-dependent constant that goes to zero in the limit β2 → 1.

Remark 1. This result and the result in Reddi et al. (2018) together distinguish large-β2-RMSprop from small-β2-RMSprop: the former converges to a bounded region, while the latter can diverge. Note that there is a gap between the lower bound of β2 here and the upper bound of β2 in the counter-example. We do not try to provide tight bounds on the threshold of β2, as our main goal is to show a qualitative difference between large-β2-RMSprop and small-β2-RMSprop.

Remark 2. Condition (4) in Theorem 4.3 implies that 1 − β2 ≤ O(n^{-3.5}). In the appendix we introduce three problem-dependent parameters ρ1 ∈ [1, √n], ρ2 ∈ [0, n], and ρ3 ∈ [1, √n] in equations (14), (15), and (16), and improve the sufficient condition (4) to 1 − β2 ≥ O(1/(nρ1ρ2ρ3)). For the worst case, the bound is O(n^{-3.5}), just like condition (4) in Theorem 4.3. In the actual training process, ρ1, ρ2, and ρ3 may not reach their upper bounds, so the threshold of β2 can be lower in practice (see Appendix A.5 for some empirical estimates of the ρi's). The dependence on the number of batches n suggests that as n increases, the required hyper-parameter β2 should be larger. This is understandable, since more mini-batches means larger noise in the stochastic gradient, and thus a larger β2 is required. There is a gap between our theoretical bound on β2 and the empirical transition point, and it is an interesting future question to close this gap.

Remark 3.
We point out three possible algorithm behaviors: divergence to infinity (or divergence for short), convergence to a bounded region (or non-divergence for short), and convergence to critical points. We distinguish these three cases to make it easier to explain the qualitative difference between the small-β2 and large-β2 regimes. For non-realizable cases, the phase transition is from divergence to non-divergence. Therefore, it is important to discard the bounded-gradient assumption: this assumption eliminates the possibility of divergence of the gradients a priori. To be clear, there are actually two sub-cases of non-divergence: the iterates can stay in a bounded but huge region (bad case), or the iterates stay in a bounded region whose size depends on some parameters (good case). Indeed, the "convergence" of constant-stepsize SGD is in the sense of "converging to a region with size proportional to the noise variance". Our result of "converging to a bounded region" is also meaningful, as the size of the region goes to zero as the noise variance goes to 0 or D0 goes to 0 (the realizable case). Note that "divergence" can also be interpreted as "not converging to critical points", which is the notion used in Reddi et al. (2018), instead of "diverging to infinity". We use the latter concept of "diverging to infinity" for the term "divergence", because "not converging to critical points" can include the good case of converging to a small region around critical points (like constant-stepsize SGD). In the example of Reddi et al. (2018), a constrained problem is considered (bound constraint [−1, 1]), so divergence to infinity cannot happen. We add an example where the iterates and the gradients can diverge to infinity for small β2; see Appendix A.2.

As a corollary of Theorem 4.3, if the optimization problem satisfies SGC (i.e., D0 = 0), RMSprop converges to critical points.

Corollary 4.1. Suppose the assumptions of Theorem 4.3 hold. Further, assume (2) holds with D0 = 0, i.e., ∑_{j=0}^{n−1} ‖∇fj(x)‖2² ≤ D1 ‖∇f(x)‖2² for all x. Then we have min_{t∈(1,T]} ‖∇f_{nt}‖1 ≤ O(log T / √T), ∀ T ≥ 4.

With the above corollary, the numerical result in Figure 1 should not be surprising: problem (1) satisfies the strong growth condition, and thus there is always a range of β2 inside which RMSprop converges; we just need to tune β2 to be larger. We can prove a similar convergence result for Adam with small β1 and large β2.

Theorem 4.4. For optimization problem (3), assume that f is lower-bounded by f∗ and that fj is gradient Lipschitz continuous with constant L for all j. Furthermore, assume that fj satisfies (2) for all x. Then, for randomly shuffled Adam with diminishing step size ηt = η1/√t and β1, β2 satisfying T1(β1, β2) + T2(β2) < 1 − 1/√2, we have min_{t∈[1,T]} ‖∇f_{nt}‖1 ≤ O(log T / √T) + O(Q_{3,5} √D0), ∀ T ≥ 4, where Q_{3,5} is a constant that approaches 0 in the limit T1 + T2 → 0, T2 is defined in (4), and T1 is defined as T1(β1, β2) = √(5dn/β2^n) · dn²D1 · (β1/β2^n) · ((1 − β1)/(1 − β1^n) + 1).

Remark: This result shows that controlling β2 and β1 together can ensure the convergence of Adam. We conjecture that the same convergence can be proved for a large range of β1, but we are not able to prove that for now (which is why we focus on RMSprop in this work) and leave it to future work.

5 EXPERIMENTS

We conduct image classification and GAN experiments on MNIST and CIFAR-10 to support our theoretical findings. The details of the GAN experiments are in the appendix; in this section we focus on the image classification results.
We visualize the optimization trajectory when training on MNIST for small β2 = 0.8 and large β2 = 0.99 in Figure 2. We observe different behaviors: while the trajectory for β2 = 0.8 moves away from the bottom of the basin, for the larger β2 the trajectory stays in the level set and has decreasing loss values. In the CIFAR experiments, we use ResNet-18. We choose β2 = 0.8, 0.9, 0.95, and 0.99, respectively. With different batch sizes 8, 16, and 32, we run each algorithm for 100 epochs without explicit regularization. Table 2 shows two phenomena: first, for a fixed batch size, there is a transition point of β2 above which the accuracy suddenly jumps; second, the transition point is closer to 1 as the batch size decreases. More specifically, for batch size 8, the transition point lies in [0.95, 0.99]: the average training accuracy is 44.53% for β2 = 0.95, but jumps to 99.74% for β2 = 0.99. For batch size 16, the transition point lies in [0.9, 0.95]: the average training accuracy is 67.27% for β2 = 0.9, but jumps to 96.38% for β2 = 0.95. For batch size 32, the transition point lies in [0.8, 0.9]. As the batch size increases from 8 to 16 and then 32, the transition point decreases from 0.99 to 0.95 and then to 0.9. These two phenomena are consistent with our theory. The first phenomenon can be explained by Theorem 4.3 and Corollary 4.1, which state that for large enough β2, RMSprop converges. The second phenomenon can be explained by Theorem 4.3 as well: as explained in Remark 2, the required β2 decreases as the number of mini-batches n decreases, i.e., as the batch size increases. Next, we demonstrate that the convergence speed of SGD is much slower than that of Adam under the same experimental setting as Table 2. We compare the average training and test accuracy at the 10-th epoch. As Table 3 shows, the accuracy of Adam is much higher than that of SGD at the 10-th epoch. All code generating the experimental results is available in the GitHub repository https://github.com/soundsinteresting/RMSprop

6 CONCLUSION

In this work, we study the convergence behavior of RMSprop by taking a closer look at its hyper-parameters. Specifically, for realizable problems, we provide a data-dependent threshold of β2 above which we prove the convergence of randomly shuffled RMSprop and small-β1 Adam without a bounded gradient assumption. We also show that RMSprop converges to a bounded region under non-realizable settings. These findings reveal that there is a critical threshold of β2 regarding the convergence behavior of RMSprop, and the phase transition is supported by the numerical experiments. Our results provide basic guidelines for tuning hyper-parameters in practice.

7 ACKNOWLEDGEMENT

M. Hong is supported by NSF grant CMMI-1727757. Ruichen Li from Peking University helped check part of the proof of Theorem 4.3. We thank all anonymous reviewers for their feedback. We also want to thank Eduard Gorbunov and Juntang Zhuang for pointing out some mistakes in earlier versions on OpenReview.
1. What is the focus of the paper regarding machine learning? 2. What are the strengths of the proposed approach, particularly in terms of its practicality and interest in the machine learning community? 3. Do you have any concerns or suggestions regarding the clarity and quality of the paper's content? 4. How does the reviewer assess the significance of the paper's contributions to the field of machine learning?
Review
Review Summary: The paper studies one of the most popular algorithms in machine learning: RMSprop. More specifically, it investigates the relation between the hyper-parameters and the convergence of the algorithm. By proving convergence without using the "bounded gradient" assumption, the authors establish a phase transition from divergence to non-divergence for RMSprop. Pros: The paper concerns one of the most important algorithms in machine learning. In my opinion, the problem is practical and of interest to the machine learning community. The results of the paper provide explicit conditions on the hyper-parameters of RMSprop/Adam that ensure the convergence of the algorithms. These results provide basic guidelines for tuning the hyper-parameters of the algorithms in practice. Cons: Apart from the strong points, I still have some concerns about the clarity of the paper. I hope the authors can address my concerns to improve the quality of the paper. The parameter β2 is the most important subject of the paper. Until Algorithm 1, the paper discusses β2 without defining it clearly. It would be clearer if it were mentioned from the beginning of the paper that β2 comes from Algorithm 1. The authors divide the problems into two sub-classes to investigate, realizable and non-realizable, which are not clearly defined. It would be better if the authors could define these two sub-classes more formally. The experiments supporting the theoretical results are comprehensible. However, I would suggest the authors provide a figure with epochs on the x-axis and accuracy on the y-axis, so that readers can get a better idea of how SGD and RMSprop behave during training.
ICLR
Title Active Deep Probabilistic Subsampling

Abstract Subsampling a signal of interest can reduce costly data transfer, battery drain, radiation exposure and acquisition time in a wide range of problems. The recently proposed Deep Probabilistic Subsampling (DPS) method effectively integrates subsampling in an end-to-end deep learning model, but learns a static pattern for all datapoints. We generalize DPS to a sequential method that actively picks the next sample based on the information acquired so far, dubbed Active-DPS (A-DPS). We validate that A-DPS improves over DPS for MNIST classification at high subsampling rates. We observe that A-DPS learns to actively adapt based on the previously sampled elements, yielding different sampling sequences across the dataset. Moreover, we demonstrate strong performance in active acquisition Magnetic Resonance Image (MRI) reconstruction, outperforming DPS and other deep learning methods.

1 INTRODUCTION

Present-day technologies produce and consume vast amounts of data, which are typically acquired using an analog-to-digital converter (ADC). The amount of data digitized by an ADC is determined not only by the temporal sampling rate, but also by the manner in which spatial acquisitions are taken, e.g. by using a specific design of sensor arrays. Reducing the number of sample acquisitions needed can lead to meaningful reductions in scanning time, e.g. in Magnetic Resonance Imaging (MRI), radiation exposure, e.g. in Computed Tomography (CT), battery drain, and bandwidth requirements. While the Nyquist theorem is traditionally used to provide theoretical bounds on the sampling rate, in recent years signal reconstruction from sub-Nyquist sampled data has been achieved through a framework called Compressive Sensing (CS). First proposed by Donoho (2006), and later applied to MRI by Lustig et al. (2007), CS leverages structural signal priors, specifically sparsity under some known transform. By taking compressive measurements followed by iterative optimization of a linear system under said sparsity prior, reconstruction of the original signal is possible while sampling at sub-Nyquist rates. Researchers have employed CS with great success in a wide variety of applications, such as radar (Baraniuk & Steeghs, 2007; Ender, 2010), seismic surveying (Herrmann et al., 2012), spectroscopy (Sanders et al., 2012), and medical imaging (Han et al., 2016; Lai et al., 2016). However, both the need to know the sparsifying basis of the data, and the iterative nature of the reconstruction algorithms, still hamper practical applicability of CS in many situations. These limitations can be overcome by the use of deep learning reconstruction models that make the sparsity assumption implicit and facilitate non-iterative inference once trained. Moreover, despite adhering to the given assumptions, the (typically random) nature of the measurement matrix in CS does not necessarily result in an optimal measurement given the underlying data statistics and the downstream system task. This has recently been tackled by algorithms that learn the sampling scheme from a data distribution.
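For readers unfamiliar with the classical pipeline being replaced, a small generic numpy toy (not taken from the paper): recover a sparse vector from M < N random measurements with ISTA, i.e. soft-thresholded gradient steps on the ℓ1-regularized least-squares problem. All constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 256, 64, 8
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
A = rng.standard_normal((M, N)) / np.sqrt(M)                       # random measurement matrix
y = A @ x_true                                                      # compressive measurement

L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the quadratic data term
lam, x = 0.01, np.zeros(N)
for _ in range(500):                       # ISTA iterations
    z = x + A.T @ (y - A @ x) / L          # gradient step on ||Ax - y||^2 / 2
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding (sparsity prior)
print("recovery error:", np.linalg.norm(x - x_true))
```

The learned-sampling methods discussed next replace both the random measurement matrix and this iterative solver with trained networks.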
In general, these data-driven sampling algorithms can be divided into two categories: algorithms that learn sampling schemes which are fixed once learned (Huijben et al., 2020a;b;c; Ravishankar & Bresler, 2011; Sanchez et al., 2020; Bahadir et al., 2019; Bahadir et al., 2020; Weiss et al., 2019), and algorithms that learn to actively sample (Ji et al., 2008; Zhang et al., 2019; Jin et al., 2019; Pineda et al., 2020; Bakker et al., 2020), selecting new samples based on sequentially acquired information. The former type of algorithm learns a sampling scheme that - on average - selects informative samples for all instances originating from the training distribution. However, when this distribution is multi-modal, using one globally optimized sampling scheme can easily be sub-optimal at the instance level. Active acquisition algorithms deal with such shifts in underlying data statistics by conditioning the sampling behavior on previously acquired information from the instance (e.g. the image to be sampled). This results in a sampling sequence that varies across test instances, i.e. sampling is adapted to the new data. This adaptation as a result of conditioning promises lower achievable sampling rates, or better downstream task performance at the same rate, compared to sampling schemes that operate identically on all data. In this work, we extend the Deep Probabilistic Subsampling (DPS) framework (Huijben et al., 2020a) to an active acquisition framework by making the sampling procedure iterative and conditional on the samples already acquired, see Fig. 1. We refer to our method as Active Deep Probabilistic Subsampling (A-DPS). We show how A-DPS clearly exploits the ten different modalities (i.e. the digits) present in the MNIST dataset to adopt instance-adaptive sampling sequences. Moreover, we demonstrate both on MNIST (LeCun et al., 1998) and on the real-world fastMRI knee dataset (Zbontar et al., 2018) that A-DPS outperforms other state-of-the-art models for learned sub-Nyquist sampling. We make all code publicly available upon publication, in order to facilitate benchmarking against all provided baselines and A-DPS in future research.

2 RELATED WORK

Recently, several techniques for learning a fixed sampling pattern have been proposed, especially in the field of MR imaging, in which Ravishankar & Bresler (2011) were among the first. In this work, the authors make use of non-overlapping cells in k-space, and move samples between these cells. During training, Ravishankar & Bresler (2011) alternate between reconstruction and relocation of sampling positions. After a reconstruction step they sort the cells in terms of reconstruction error and an infinite-p norm. Selected samples from lower-scoring cells are relocated to higher-scoring cells in a greedy fashion. Sanchez et al. (2020) also propose a greedy approach, in which samples are not relocated between cells, but greedily chosen to optimize a reconstruction loss on a batch of examples. However, both types of greedy optimization do not allow for joint learning of the sampling together with a downstream reconstruction/task model, as the reconstruction has to either be parameter-free or pretrained to work well with a variety of sampling schemes. Bahadir et al. (2019), on the other hand, propose to learn the sampling pattern by thresholding pixel-based i.i.d. samples drawn from a uniform distribution, dubbed Learning-based Optimization of the Under-sampling PattErn (LOUPE).
The sampling rate of LOUPE is indirectly controlled by promoting sparsity through the use of an ℓ1 penalty on the thresholds. One of the first active sampling schemes was proposed by Ji et al. (2008), who leverage CS reconstruction techniques that also give a measure of uncertainty of the reconstruction using Bayesian modeling. Ji et al. (2008) leveraged this uncertainty in the reconstruction to adaptively select the next measurement that reduces this uncertainty by the largest amount. However, this method - and other similar works (Carson et al., 2012; Li et al., 2013) - relies on linearly combined measurements, rather than the discrete sampling with which we concern ourselves here. In the field of MRI, Zhang et al. (2019) propose an active acquisition scheme by leveraging a reconstruction and an adversarial neural network. Whereas the reconstruction network is trained to reconstruct MR images from the subsampled Fourier space (k-space), the adversarial network is trained to distinguish between already sampled and omitted lines in this space. The k-space line that is most believed to be 'fake' (i.e. filled in by the reconstruction network) by the adversarial network is sampled next. However, this framework only works for undersampled Fourier-to-image reconstruction tasks, as the discriminator requires mappings of the image in k-space. Jin et al. (2019) put forth an active acquisition scheme for MRI by leveraging reinforcement learning (RL). Two neural networks, one for sampling and one for reconstruction, are trained jointly using a Monte-Carlo tree search, resulting in a sampling policy that is dependent on the current reconstruction of the image. Concurrently with our work, both Pineda et al. (2020) and Bakker et al. (2020) proposed RL-based active acquisition techniques. Pineda et al. (2020) leverage a Double Deep Q-Network. The model is trained using a modified ε-greedy policy, in which the best action is taken with probability 1 − ε, and an exploratory action is taken with probability ε. Bakker et al. (2020) compare greedy with non-greedy training, finding that the greedy method leads to a higher degree of adaptability, especially for tasks with a long horizon (i.e. more samples to be taken). Both of the frameworks proposed by Pineda et al. (2020) and Bakker et al. (2020) make use of a pretrained reconstruction network, which differs from the proposed A-DPS method that enables joint training of both the reconstruction (task) network and the sampling network. Even though subsampling is an extreme form of data compression, we differentiate our setting from typical data compression architectures like deep encoder-decoder structures (Theis et al., 2017; Ballé et al., 2017), as these methods do not reduce data rates at the measurement stage. The feedback recurrent autoencoder proposed by Yang et al. (2020) is, however, related to A-DPS through its use of a recurrent context. But whereas Yang et al. (2020) learn a recurrent context to inform the encoder stage of the network, A-DPS uses it to inform the sampling pattern.

3 METHOD

3.1 GENERAL FRAMEWORK

Given a prediction task s, we are interested in learning to predict an optimal subsampling scheme A ∈ {0, 1}^{M×N} (with M ≪ N) on an input signal x ∈ R^N, resulting in a measurement ỹ ∈ R^M:

ỹ = Ax.   (1)

Each row in A is constrained to have an ℓ0-norm of 1, while each column in A is constrained to have an ℓ0-norm of either 0 or 1, i.e. each of the N candidate samples is selected at most once. In the rest of this paper we will index these candidate samples with
n ∈ {1, . . . , N}, and the selected samples with m ∈ {1, . . . , M}. The percentage of selected samples out of the candidate samples is called the sampling ratio r = M/N · 100%. We also introduce a non-compressed form of the measurement ỹ, called y ∈ R^N, that contains N − M zeros, and M non-zeros at the sampled indices specified by A, i.e. the masked input. This way, the location of the samples from x is preserved, which is especially useful when A changes during training. To acquire y from x, one seeks a subsampling mask d that can be applied to x via:

y = d ⊙ x = A^T A x,   (2)

where ⊙ denotes element-wise multiplication. From the resulting measurement y we then aim at predicting the downstream task s through:

ŝ = fθ(y),   (3)

where fθ(.) is a function that is differentiable with respect to its input and parameters θ, e.g. a neural network. Normally, optimization of the task model fθ(.) is achieved through backpropagation of some loss function L(s, ŝ). However, calculating gradients with respect to the sampling matrix is blocked by its combinatorial nature, inhibiting joint training of the task with the sampling operation. The DPS framework provides a solution to this problem, on which we elaborate in the next section.

3.2 DPS: DEEP PROBABILISTIC SUBSAMPLING

To enable joint training of the sampling operation with the downstream task model, Huijben et al. (2020a) introduce DPS. Rather than optimizing A directly, they propose to optimize a generative sampling model P(A|φ), where φ are learned unnormalized logits of (possibly multiple) categorical distribution(s). Each distribution expresses the probabilities for sampling any of the elements xn from x through sampling matrix A. More specifically, φ_{m,n} is the log-probability for setting a_{m,n} = 1, and thus sampling xn as the m-th sample. To generate a sampling pattern from these unnormalized logits, i.e. to implement this conditional model, the Gumbel-max trick is leveraged (Gumbel, 1954). In the Gumbel-max trick the unnormalized logits are perturbed with i.i.d. Gumbel noise samples e_{m,n} ∼ Gumbel(0, 1). By selecting the maximum of this perturbation, a realization of the sampling mask can be found using:

A_{m,:} = one-hot_N { argmax_n {w_{m−1,n} + φ_{m,n} + e_{m,n}} },   (4)

where A_{m,:} denotes the m-th row of A, and one-hot_N creates a one-hot vector of length N, with the one at the index specified by the argmax operator. Moreover, the cumulative mask w_{m−1,n} ∈ {−∞, 0} masks previously selected samples by adding minus infinity to those logits, thereby ensuring sampling without replacement. During backpropagation, gradients are computed by relaxing this sampling procedure using the Gumbel-softmax trick (Jang et al., 2016; Maddison et al., 2017), resulting in:

∇_{φm} A_{m,:} := ∇_{φm} E_{em} [ softmax_τ {w_{m−1,n} + φ_{m,n} + e_{m,n}} ],   (5)

where τ denotes the temperature parameter of the softmax operator. Setting τ > 0 results in a smoothed sampling matrix A (i.e. elements can take values between 0 and 1 as well), allowing gradients to distribute over multiple logits during training. In the limit of τ → 0 the softmax operator approaches the one-hot argmax function of equation (4). Although this approach - also known as straight-through Gumbel-softmax - leads to biased gradients, it has been shown to work well in practice, and Huijben et al. (2020a) keep τ at a fixed value during training. Huijben et al. (2020a) propose two regimes of DPS. First, Top-1 sampling, an expressive form of DPS where each of the M selected samples is separately conditioned on all N candidate samples, resulting in M × N trainable logits φ_{m,n}.
Second, Top-M sampling (called Top-K in their paper), a constrained form where all M samples together are conditioned on all N candidate samples, i.e. the logits φn are shared between the M rows of A, resulting in only N trainable logits. While Top-1 sampling is more expressive, Huijben et al. (2020a) noticed slightly better results for the Top-M regime, possibly thanks to the smaller number of trainable logits, which facilitates optimization. For scalability reasons, we thus choose to continue with Top-M sampling in this work and refer to this regime as DPS in the rest of this paper. We refer the reader to Huijben et al. (2020a) for more details regarding DPS.

3.3 A-DPS: ACTIVE DEEP PROBABILISTIC SUBSAMPLING

We have seen how DPS enables the learning of a sampling scheme that selects M out of N samples. However, these samples are selected simultaneously. A-DPS selects its samples in an iterative fashion, separating the logits into I acquisition steps, i.e. φ^i with i ∈ {1, 2, . . . , I} and I = M. Active acquisition is then achieved by introducing dependency between samples, i.e. the sampling distribution at acquisition step i should depend on the information acquired in previous acquisition steps. To that end, we introduce a context vector c^i that accumulates information from all previous time steps. We then condition the sampling distribution on this context by learning a transformation φ = gκ(c), where gκ(.) is a function that is differentiable with respect to its input and parameters κ. Thus, instead of optimizing the parameters directly (as DPS does), we optimize gκ(c), which we will refer to as the sampling model. The question then arises how to best generate this context from previous samples. Here, we follow the analysis-by-synthesis principle, and let the analysis (the sampling model) depend on the synthesis (the task model). This way, the task model can inform the sampling model about what information it needs to achieve its assigned task. The iterative analysis-by-synthesis scheme of A-DPS is formalized as follows:

φ^i = gκ(c^{i−1}),   (6)
ŝ^i, c^i = fθ(y^i, c^{i−1}) = fθ(d_{φ^i} ⊙ x^i, c^{i−1}).   (7)

We hypothesize that it is beneficial to add memory to the context vector, so that it does not only contain information about the previous state of the task model fθ(.) but can also contain information from even earlier acquisition steps. To this end, we propose to leverage a recurrent neural network structure in the form of a Long Short-Term Memory (LSTM) cell (Hochreiter & Schmidhuber, 1997). This LSTM is integrated into the neural networks as a separate layer. We visualize the architecture of the A-DPS framework in Fig. 1 and discuss the computational complexity of A-DPS in Appendix A.

4 EXPERIMENTS

To show the applicability of A-DPS to both classification and reconstruction tasks, we evaluate its performance in two experiments. First, we compare A-DPS with DPS at different subsampling ratios on an MNIST classification example in Section 4.1. Second, we compare A-DPS with contemporary CS and deep learning methods on an MRI example in Section 4.2, leveraging the fastMRI knee dataset (Zbontar et al., 2018).

4.1 MNIST

Experiment setup Classification performance at different sampling rates was tested on the MNIST database (LeCun et al., 1998), consisting of 70,000 grayscale images of 28 × 28 pixels of handwritten digits between 0 and 9. We keep the original data split of 60,000 training and 10,000 testing examples.
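Before detailing the task and sampling models, here is a minimal PyTorch-style sketch of a single A-DPS acquisition step: the straight-through Gumbel-softmax selection of equations (4)–(5), conditioned on the recurrent context as in equations (6)–(7). Function and variable names, tensor shapes, and the masking details are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_pick(logits, cum_mask, tau=2.0):
    """Straight-through Gumbel-softmax over the not-yet-selected candidates.
    logits:   (N,) unnormalized log-probabilities phi produced by g_kappa
    cum_mask: (N,) 0 for available candidates, -inf for already selected ones"""
    u = torch.rand_like(logits).clamp_min(1e-9)
    gumbel = -torch.log(-torch.log(u))                               # e ~ Gumbel(0, 1)
    perturbed = logits + cum_mask + gumbel
    soft = F.softmax(perturbed / tau, dim=-1)                        # relaxed pattern, eq. (5)
    hard = F.one_hot(perturbed.argmax(dim=-1), logits.shape[-1]).float()   # eq. (4)
    return hard + (soft - soft.detach())          # forward: one-hot, backward: softmax grads

def adps_step(x, mask, cum_mask, context, sampler, task_model, tau=2.0):
    """One A-DPS acquisition step, following equations (6) and (7)."""
    logits = sampler(context)                                        # phi^i = g_kappa(c^{i-1})
    pick = gumbel_softmax_pick(logits, cum_mask, tau)
    mask = mask + pick                                               # add the new sample to d
    cum_mask = cum_mask.masked_fill(pick.detach() > 0.5, float("-inf"))   # no replacement
    y = mask * x                                                     # zero-masked measurement y^i
    s_hat, context = task_model(y, context)                          # (s^i, c^i) = f_theta(y^i, c^{i-1})
    return s_hat, mask, cum_mask, context
```

Calling adps_step M times in a loop yields the full A-DPS sampling sequence; calling the selection once with M shared logits recovers the DPS top-M baseline.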
4 EXPERIMENTS To show the applicability of A-DPS on both classification and reconstruction tasks, we evaluate its performance in two experiments. First, we compare A-DPS with DPS at different subsampling ratios on an MNIST classification example in Section 4.1. Second, we compare A-DPS with contemporary CS and deep learning methods on an MRI example in Section 4.2, leveraging the fastMRI knee dataset (Zbontar et al., 2018). 4.1 MNIST Experiment setup Classification performance at different sampling rates was tested on the MNIST database (LeCun et al., 1998), consisting of 70,000 grayscale images of 28 × 28 pixels of handwritten digits between 0 and 9. We keep the original data split of 60,000 training and 10,000 testing examples. We train both DPS top-M and A-DPS to take partial measurements in the pixel domain at different sampling rates. Referring back to Fig. 1 and equations (6) and (7), DPS top-M sampling only consists of the sampling and task model (f_θ(.)) parts. All M samples are selected at the same time and used once by f_θ(.) to predict which digit the network is looking at. In the case of A-DPS, however, only one sample is taken at a time and used as input for f_θ(.). Here, f_θ(.) also creates a context that is used by the sampling network g_κ(.) to select the next sample. A-DPS iterates through this loop M times in order to select all the samples. We keep f_θ(.) the same for both DPS and A-DPS, so that the last iteration of A-DPS is similar to that of DPS top-M (i.e. M samples are selected and fed through f_θ(.)). Task model In the classification network f_θ(.), all 784 (28 × 28) zero-masked samples are used as input for four fully connected layers. The fully connected layers have 784, 256, 128, and 128 nodes, respectively. Moreover, they are all activated by leaky ReLU activation functions with a negative slope of 0.2. The first three layers also have a dropout of 30%. The output of the last fully connected layer is then fed into an LSTM with a hidden size of 128. The output of the LSTM is both used as the context for the sampling network, as well as the input for one final fully connected layer. This final fully connected layer maps the output of the LSTM into the 10 class labels, and is therefore followed by a softmax activation function. In the case of A-DPS the output of the LSTM is also used as the input for the sampling network g_κ(.). The sampling network consists of two linear layers with output sizes of 256 and 784, respectively. Moreover, after the first layer a leaky ReLU activation function is used with a negative slope of 0.2, and a dropout of 30% is applied. Training details Both sampling strategies were trained to minimize the categorical cross-entropy loss. The temperature parameter was fixed to 2. We again employ SGD with the Adam solver (lr = 2e−4, β1 = 0.9, β2 = 0.999, and ε = 1e−7) to minimize the loss function. Training was performed on batches of 256 examples for 50 epochs. Results The resulting accuracy on the test set is shown in Fig. 2a. A-DPS outperforms DPS especially when the actual number of samples taken is very low. We hypothesize that at very low data rates it is especially important to select those candidate samples that carry a lot of information given the previously acquired samples. Two examples of the selected sampling masks at r = 4% are displayed in Fig. 2b. Here, it is shown how DPS selects all samples at once, while A-DPS selects them in an iterative fashion, resulting in different sampling patterns for the two examples. To study the effect of the LSTM in this set-up, we perform an ablation study by replacing it with a fully connected layer, resulting in a direct projection of the context vector from the previous state without any further memory. The results of this ablation study are shown in Fig. 3a. Here we can see that for the stricter sampling regimes below 3% the use of an LSTM leads to a higher accuracy.
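A minimal PyTorch sketch of the classifier described in the Task model paragraph above; layer sizes, negative slopes, and dropout rates follow the text, while everything else (module layout, use of nn.LSTMCell, applying the softmax inside the cross-entropy loss) is our assumption.

import torch
import torch.nn as nn

class MNISTTaskModel(nn.Module):
    """f_theta: zero-masked input -> class logits and an updated context (the LSTM state)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(784, 784), nn.LeakyReLU(0.2), nn.Dropout(0.3),
            nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Dropout(0.3),
            nn.Linear(256, 128), nn.LeakyReLU(0.2), nn.Dropout(0.3),
            nn.Linear(128, 128), nn.LeakyReLU(0.2),
        )
        self.lstm = nn.LSTMCell(128, 128)   # memory across acquisition steps
        self.head = nn.Linear(128, 10)      # class logits; softmax is applied inside the loss

    def forward(self, y, state=None):
        h = self.backbone(y)                # y: (B, 784) zero-masked measurement
        hx, cx = self.lstm(h, state)
        return self.head(hx), (hx, cx)      # hx doubles as the context consumed by g_kappa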
To analyze the sampling patterns created by A-DPS we create a t-SNE plot (Van Der Maaten & Hinton, 2008), which maps the created sampling pattern (for r = 4%) per image in the test set to one dot in a 2D space. In this 2D space t-SNE aims to preserve spatial relationships from the higher dimension, i.e. similar high-dimensional vectors are mapped close together, while dissimilar ones are mapped further apart. The resulting plot is shown in Fig. 3b, where each dot is colored with the ground truth label (digit) of the corresponding image. The clustering in this figure indicates how similar selected sampling patterns are. For example, the sampling patterns for digit zero tend to be dissimilar from all the other sampling patterns, while the digits four, six, and nine are sampled more similarly using A-DPS. To get more insight into the inner workings of A-DPS, we perform a similar t-SNE analysis on the context vector that A-DPS constantly updates after acquisition of a new sample. The result is shown in Fig. 4. Here we show the same scatter plot with two different colorings, namely the current acquisition step and the ground truth label (digit). It can be seen how the context vectors are similar for the first few acquisition steps, but as information comes in, the context vectors accumulate useful information and branch off into different regions depending on the ground truth label. 4.2 MRI Experiment setup To show the applicability of A-DPS, we demonstrate its performance on line-based MRI. We make use of the NYU fastMRI database of knee MRI volumes (Zbontar et al., 2018). Only the single-coil measurements were selected, from which the outer slices were removed. The resulting data was split into 8,000 training, 2,000 validation, and 3,000 testing MRI slices. All slices were cropped to the central 208 × 208 pixels and normalized between 0 and 1. The subsampling operation on one of these MRI slices is then performed in k-space (Fourier space): Y = |F^H (D ⊙ F X)|, (8) where |.| is the magnitude operator. Moreover, X ∈ R^{N×N} is the fully sampled ground truth image and Y ∈ R^{N×N} is the subsampled image, both in the pixel domain. In this case N is equal to 208. Furthermore, F and F^H denote the forward and inverse 2D Fourier transform, respectively. D ∈ {0, 1}^{N×N} denotes the sampling mask in k-space. Normally Y would be complex, due to the asymmetrical nature of MRI measurements and the incomplete subsampling mask. Here, we choose to take the magnitude of Y to simplify reconstruction. We hypothesize that doing so does not significantly change the problem, as the imaginary part of fully sampled images in the NYU fastMRI dataset is very small compared to the real part. Task model To reconstruct an estimate of the original image X̂ from the partial measurement Y, a deep unfolded proximal gradient method is used (Mardani et al., 2018), in which K iterations of a proximal gradient method are unfolded as a feed-forward neural network following: X̂^{(k+1)} = P^{(k)}_{(ζ)} { X̂^{(k)} − α^{(k)}_{(ψ)} ( |F^H (D ⊙ F X̂^{(k)})| − Y ) }, (9) where P^{(k)}_{(ζ)}(.) is a trainable image-to-image proximal mapping and α^{(k)}_{(ψ)} is the step size, parameterized by ζ and ψ, respectively. We implement this proximal gradient method for K = 3 steps, with the trainable step size α^{(k)}_{(ψ)} implemented as a 3 × 3 convolutional layer. Each proximal mapping is implemented as a series of four convolutions with 16, 16, 16, and 1 feature(s) each and a kernel size of 3 × 3. All convolutions but the last are followed by ReLU activation functions.
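The measurement model of equation (8) and the unfolded update of equation (9) can be sketched as follows; the batch/channel handling needed by the convolutional proximal mappings and step sizes, as well as the simple initialization of X, are simplifications on our part rather than details from the paper.

import torch

def measure(X, D):
    # equation (8): Y = |F^H (D ⊙ F X)| for a real-valued image X and binary k-space mask D
    return torch.abs(torch.fft.ifft2(D * torch.fft.fft2(X)))

def unrolled_recon(Y, D, prox, alpha, K=3):
    # equation (9), unfolded for K steps; prox[k] and alpha[k] are small trainable conv modules
    X = Y.clone()                               # simple zero-filled-style initialization (assumption)
    for k in range(K):
        residual = measure(X, D) - Y            # data-consistency term of equation (9)
        X = prox[k](X - alpha[k](residual))     # proximal mapping P^(k) after the gradient-like step
    return X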
We will compare A-DPS to several relevant sampling baselines, namely random uniform, low-pass, variable density (VDS), greedy mask selection (Sanchez et al., 2020), LOUPE (Bahadir et al., 2019; Bahadir et al., 2020), and DPS. Under a random uniform regime all N lines are equally likely to be sampled, while under a low-pass regime the M lines closest to the DC frequency will be selected. VDS, on the other hand, is a heuristic regime that employs a probability density from which the desired number of samples is drawn. Following Lustig et al. (2007), we here use a polynomial probability density function with a decay factor of 6. For the greedy mask selection we follow the approach by Sanchez et al. (2020) and first optimize the sampling mask using the NESTA solver (Becker et al., 2011). After this, we fix the sampling mask and train our proximal gradient network. Results for both reconstruction algorithms are reported. To generate a sampling mask D using A-DPS we use a sampling network g_κ(.). This network takes two inputs. First, the output of the proximal network, i.e. the reconstructed image at that iteration. This image is analyzed using three convolutional layers with kernel sizes of 3 × 3 followed by ReLU activation functions. The output features are of sizes 16, 16, and 32, respectively. The final feature map is aggregated into a feature vector using global average pooling. Next to this feature vector, the indices of the selected lines at the previous iteration(s) are also used as input for the sampling network, encoded as an M-hot vector of dimension 208. Both the feature and selected-indices vectors are concatenated and used as input for an LSTM cell with an output dimension of 208. This is followed by two fully connected layers with 208 neurons each, having a ReLU activation after the first layer. Training details To promote the reconstruction of visually plausible images, we leverage both a Mean Squared Error (MSE) and an adversarial loss (Ledig et al., 2016). To that end we introduce a discriminator network that is trained to distinguish between real and reconstructed MR images. The discriminator is implemented using three convolutional layers with kernel sizes of 3 × 3, stride 2, and 64 feature maps, each with leaky ReLU activations. After the last convolutional layer the feature maps are aggregated into a feature vector using global average pooling, with a dropout rate of 40%, which is mapped to a single output probability using one fully connected layer followed by a sigmoid activation function. Next to the MSE loss and adversarial loss, we add a third loss term that penalizes the MSE between the discriminator features of real and generated images. The total loss function is a weighted summation of these three losses, with weights 1, 5e−6, and 1e−7, respectively. All sampling mask selection strategies were then trained using SGD on batches of 4 images for a total of 10 epochs. We again employ the Adam solver (lr = 2e−4, β1 = 0.9, β2 = 0.999, and ε = 1e−7) to minimize the loss function, and set the temperature parameter to 2. We choose M = 26, which results in a sampling ratio of r = 12.5%, or an acceleration factor of 8. Results We score the different strategies based on three metrics: the normalized mean square error (NMSE), the peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM) (Wang et al., 2004). Averaged results over 5 runs for an acceleration factor of 8 are shown in Table 1, while those for an acceleration factor of 16 are shown in Table 2. We test for the statistical significance of the gains made by A-DPS over DPS in Appendix B. An example of an A-DPS reconstruction is shown in Fig. 5, while a comprehensive overview of all baselines for this example can be found in Appendices C and D.
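The three reported metrics can be computed per slice as below; the exact normalization conventions are not spelled out in the text, so the fastMRI-style definitions used here are one reasonable reading, and skimage is used for SSIM. The arrays are placeholders.

import numpy as np
from skimage.metrics import structural_similarity

def nmse(gt, rec):
    return np.linalg.norm(gt - rec) ** 2 / np.linalg.norm(gt) ** 2

def psnr(gt, rec, data_range=1.0):
    return 10 * np.log10(data_range ** 2 / np.mean((gt - rec) ** 2))

gt = np.random.rand(208, 208)                                   # placeholder ground-truth slice
rec = np.clip(gt + 0.01 * np.random.randn(208, 208), 0, 1)      # placeholder reconstruction
scores = {
    "NMSE": nmse(gt, rec),
    "PSNR": psnr(gt, rec),
    "SSIM": structural_similarity(gt, rec, data_range=1.0),
}
print(scores)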
To analyse the workings of A-DPS in this MRI setting we first plot the relative occurrence of different line indices over the test set. This is shown in Fig. 6a. We can see that A-DPS always selects a band of lines around 0, with a lower occurrence of high-frequency lines. We also employ t-SNE on the context vector that A-DPS updates every acquisition step. The result of this is shown in Fig. 6b. It can be seen that until acquisition step 15 the context vectors are very similar across images per acquisition step, while after acquisition step 15 the context vectors start fanning out. It is hypothesized that at the start of the sampling procedure not a lot of actionable information is available to the system, but this increases as more samples are taken over time. 5 CONCLUSION We proposed a generalization of DPS, which enables active acquisition, called A-DPS. We demonstrated its applicability on both an MNIST classification task as well as an MRI reconstruction task. Moreover, we found that the adaptive nature of A-DPS improves downstream task performance over other sampling pattern selection methods. We find that A-DPS uses qualitatively differing sampling strategies depending on the context. On a critical note, the black-box nature of A-DPS comes with the traditional machine learning challenges of out-of-distribution generalization and overfitting. This means that in a practical application, the sub-sampling regime could obfuscate the information required to recognize failure cases. Future work includes exploring how to improve conditioning of the sampling scheme on earlier acquired information and meta-information (such as resolution, sampling ratio, and weighting). Potential future applications include 3D and dynamic MRI, CT, ultrasound, radar, video, and MIMO systems. A COMPUTATIONAL COMPLEXITY One of the drawbacks of A-DPS compared to learned fixed sampling schemes is its higher computational complexity. The main source of this complexity is the unrolling of iterations, leading to a computational complexity of O(I) = O(M/ρ), where ρ is the number of new samples drawn per acquisition step. Although we set ρ equal to 1 in all our experiments, one can in fact seamlessly interpolate between A-DPS and DPS by choosing 1 ≤ ρ ≤ M. This constitutes a trade-off between computational complexity and adaptation rate. We leave further exploration of this trade-off to future work. We can also express computational complexity in terms of run-time on a machine, in our case a GeForce GTX 1080 Ti. A comparison of DPS and A-DPS in terms of training time per epoch can be seen in Fig. 7. We can see that the training time for A-DPS increases for higher sampling ratios, where it needs to unroll through more iterations. By combining the results from Fig. 2a and Fig. 7 one can make a trade-off between run-time and accuracy: A-DPS achieves higher accuracy for stricter sampling regimes, while not increasing run-times by much. For the MRI experiment the training times per epoch are 2 and 52 minutes for DPS and A-DPS, respectively. Inference is however fast: A-DPS only requires ∼13 ms of processing time to determine the next-to-acquire k-space line and reconstruct an image after each step. This is well below the shortest reported echo time (TE) for this MRI acquisition, being 27 ms (Zbontar et al., 2018).
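As a sketch of the trade-off discussed in Appendix A, the acquisition loop from Section 3.3 only needs one change to interpolate between A-DPS and DPS: draw ρ new indices per step and run M/ρ steps. Names and signatures are again illustrative placeholders.

import torch

def a_dps_grouped(x, f_theta, g_kappa, draw_new_samples, M, rho, context):
    # rho = 1 recovers A-DPS; rho = M recovers static DPS in a single acquisition step
    mask, prediction = torch.zeros_like(x), None
    for i in range(M // rho):
        phi = g_kappa(context)
        mask = mask + draw_new_samples(phi, mask, k=rho)  # rho new indices without replacement
        prediction, context = f_theta(mask * x, context)
    return prediction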
B STATISTICAL TESTS ON MRI To analyse whether the gains made by A-DPS over DPS are statistically significant we perform two-sided paired Student’s t-tests, the results of which are shown in Table 3 and Table 4. We perform two different types of tests. Firstly, we look at the average results for each run after training on the hold-out test set. This results in a t-test with n = 5 that conveys how reliably A-DPS outperforms DPS, given a new random initialization of the trainable network weights and different stochastic optimization behavior. These results are shown in Table 3. Secondly, we perform a t-test over the results on each individual image (averaged over the 5 runs) of the hold-out test set, resulting in n = 3000. This test indicates whether A-DPS’ performance is significantly higher than that of DPS, given a new test image. These results are shown in Table 4. It can be seen how in all cases and for all metrics p < 0.05, indicating that our findings are statistically significant. C MRI RECONSTRUCTION EXAMPLES FOR ACCELERATION FACTOR 8 [Figure: sampling mask, reconstruction, and error map for each method – Ground Truth, Random Uniform, Low Pass, Variable Density, Greedy Mask Selection, LOUPE, DPS, and A-DPS.] D MRI RECONSTRUCTION EXAMPLES FOR ACCELERATION FACTOR 16 [Figure: sampling mask, reconstruction, and error map for the same methods as in Appendix C.]
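The two-sided paired t-tests of Appendix B correspond to a standard scipy call; the file names below are placeholders for per-image (or per-run) metric arrays of the two methods.

import numpy as np
from scipy.stats import ttest_rel

ssim_adps = np.load("ssim_adps.npy")   # hypothetical per-image SSIM scores, shape (3000,)
ssim_dps = np.load("ssim_dps.npy")     # same test images, scored under DPS
t_stat, p_value = ttest_rel(ssim_adps, ssim_dps)   # two-sided paired Student's t-test
print(f"t = {t_stat:.3f}, p = {p_value:.3g}")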
1. What is the main contribution of the paper regarding active subsampling?
2. What are the strengths and weaknesses of the proposed method compared to other existing methods?
3. How does the reviewer assess the novelty and impact of the paper's content?
4. What are the limitations of the method, and how does it apply to general sensing?
5. What are some suggestions for improving the presentation and comparisons in the paper?
Review
Review Summary This paper develops methods to perform active subsampling. That is, given some downstream task like classification or image reconstruction, it sequentially selects which elements of an image or signal to sample so as to perform said task. It does so by extending the Deep Probabilistic Subsampling (DPS) method developed by Huijben et al. The proposed method is applied to two problems as well as a simplified, low-resolution MRI reconstruction problem. Strengths Motivation: Active sampling is an interesting idea that has been around for some time, but was often computationally impractical. Thanks to GPUs and deep learning, active sampling is becoming more practical and its interesting to see new work in this direction. Weaknesses Novelty: The method is a small extension to the DPS method where the network that selects which rows to samples is conditioned on the existing measurements. Validation: The paper did not compare to any other active sampling strategies. The authors made no effort to replicate existing methods. Clarity: The Markov chain example in section 4.1 was hard to follow and more distracting than informative. The phrase "the task model gets to sample only one position out of every three" reads as if the model is sampling one position out of every three in the sequence. It took some time before I realized this meant that at every position in the sequence it was probing one of the three states. Impact: The results with active sampling were only marginally better than results with a fixed (learned) sampling strategy. Limitations: The method is applicable only to true subsampling problems, not general sensing. That is, one isn't designing the rows of a measurement matrix on the fly but rather selecting which row from an existing matrix (identity in most of the examples) that one would like to sample from. Recommendation The paper's presentation could be improved and it is sorely missing comparisons to other active sampling methods. I don't think the papers novelty is enough to overcome these issues and so I do not believe it is ready for publication. Comments While the proposed method was computationally impractical, active sampling was discussed extensively in [A] from a information theoretical perspective. [A] Ji, S., Xue, Y., & Carin, L. (2008). Bayesian compressive sensing. IEEE Transactions on signal processing, 56(6), 2346-2356. Because of the nonlinearity in the forward model, equation (9) is not actually proximal gradient descent. I believe there's a sign(F^HD\circFX) term missing from the (sub) gradient. Update I thank the authors for their comprehensive response. While its unfortunate they couldn't compare to any other active methods, the related work and overall clarity of the paper is significantly improved. The t-SNE plots were informative and interesting. While I have reservations about the paper's lack of comparisons, I think its publication still might be a net positive for the research community. I have updated my score. Other comments Let A(X)=F^H D\circ F X. The expression A^H(Ax-Y\circ sign(A(x))) is a subgradient of 1/2|| Y - |A(X)|||^2 but A^H(|Ax|-Y) is not. I would avoid calling (9) projected gradient descent as the "gradient" isn't really a gradient. "We have performed a statistical analysis on the performance gains made by A-DPS over DPS in the MRI reconstruction task, concluding that they are indeed statistically significant." It would be nice to see confidence intervals in Tables 1 and 2. 
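To spell out the gradient issue raised above (for the real-valued case, with sign(.) applied element-wise and a subgradient taken where A(X) = 0), a short worked derivation:

Let \mathcal{A}(X) = F^{H}\big(D \odot F X\big) and L(X) = \tfrac{1}{2}\,\big\|\,|\mathcal{A}(X)| - Y\,\big\|_F^2. Then, by the chain rule,
\nabla_X L = \mathcal{A}^{H}\!\big(\operatorname{sign}(\mathcal{A}(X)) \odot (|\mathcal{A}(X)| - Y)\big) = \mathcal{A}^{H}\!\big(\mathcal{A}(X) - Y \odot \operatorname{sign}(\mathcal{A}(X))\big),
using \operatorname{sign}(a)\,|a| = a element-wise. The data-consistency term |\mathcal{A}(\hat{X})| - Y inside equation (9) omits both the sign factor and the adjoint \mathcal{A}^{H}, which is the discrepancy noted in the review.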
Questions/comments that do not affect the review: Why use an LSTM/any network with memory? It seems the next sample depends on the previous samples, but not their order. The ablation study on pg 6 shows that memory helps (at low sampling rates), but I don't understand the intuition why. Could the LSTM just have more capacity? Typos: pg 2: "cells.During" missing space; pg 3: "However, This" capitalization.
ICLR
1. What is the main contribution of the paper in the field of compressed sensing?
2. How does the proposed approach, Active DPS, differ from the existing method, DPS?
3. What are the strengths and weaknesses of the proposed approach compared to other compressed sensing algorithms?
4. Do you have any questions regarding the experimental results or the lack thereof?
5. How does the reviewer assess the significance and impact of the paper's contributions?
Review
Review In this paper, the authors consider the problem of compressed sensing where the underlying signal of interest is captured and restored based only on sparse measurements: Specifically, this paper focuses on the scenario of Deep Probabilistic Subsampling (DPS) which finds sparse measurements in the way that the models designed to solve specific learning problems based on these measurements are jointly optimized. The authors extend DPS to a sequential framework that iteratively and actively selects the next measurement points: The proposed approach encodes the information accumulated until a time step into a context vector which is updated, and used in selecting the next point, in an LSTM-like framework (see minor comments below). In the experiments with two toy problems (including MNIST) and an MRI reconstruction problem, the authors demonstrated that the proposed Active DPS (ADPS) outperforms DPS (in toy problems) and three other compressed sensing algorithms (for MRI reconstruction). I think this paper makes a borderline case: DPS provides a framework that combines the compressed sensing part (sparse data acquisition) and the subsequent learning part in an end-to-end manner. This paper contributes by extending DPS into an active/sequential learning framework achieving significant performance gains over DPS (mainly on toy problems. see minor comments below). On the other hand, the proposed approach appears to be incremental: ADPS adds a simple sequential update structure (of a context vector) to DPS, which can be described by only two equations (6 and 7). The simplicity of the changes proposed (over DPS) is not a limitation, but it could be accompanied by an in-depth theoretical analysis, a convincing qualitative discussion or extensive experiments demonstrating the practical relevance of the proposed approach. Minor comments Apart from the last one paragraph, the Introduction Section focuses on discussing the context and motivation of Deep Probabilistic Subsampling (DPS). Instead, the authors could use this space to describe and characterize the proposed Active DPS in detail. I was not sure why the proposed architecture (Figure 1 and equations 6 and 7) is called LSTM, it has a recurrent network structure but I was not able to find any attention (gating) mechanism that characterizes LSTM. Please advise me if I missed anything. Please test if the improvements gained by ADPS over DPS on MRI reconstruction are statistically significant. Update: Thank the authors for their responses, clarification, and additional experiments. I read through authors’ responses and the comments from the other reviewers. I still think this paper makes a borderline case for 1) its technical contribution on extending DPS and thereby achieving significant performance gain on a toy problem and MRI reconstruction tasks, still 2) with limited novelty and room for a more extensive experimental validation (perhaps, beyond MRI). My other concerns on clarity and significance of experiments have been addressed. I would raise my rating to marginally above acceptance threshold (borderline).
ICLR
Title Active Deep Probabilistic Subsampling Abstract Subsampling a signal of interest can reduce costly data transfer, battery drain, radiation exposure and acquisition time in a wide range of problems. The recently proposed Deep Probabilistic Subsampling (DPS) method effectively integrates subsampling in an end-to-end deep learning model, but learns a static pattern for all datapoints. We generalize DPS to a sequential method that actively picks the next sample based on the information acquired so far; dubbed Active-DPS (ADPS). We validate that A-DPS improves over DPS for MNIST classification at high subsampling rates. We observe that A-DPS learns to actively adapt based on the previously sampled elements, yielding different sampling sequences across the dataset. Moreover, we demonstrate strong performance in active acquisition Magnetic Resonance Image (MRI) reconstruction, outperforming DPS and other deep learning methods. 1 INTRODUCTION Present-day technologies produce and consume vast amounts of data, which is typically acquired using an analog-to-digital converter (ADC). The amount of data digitized by an ADC is determined not only by the temporal sampling rate, but also by the manner in which spatial acquisitions are taken, e.g. by using a specific design of sensor arrays. Reducing the number of sample acquisitions needed, can lead to meaningful reductions in scanning time, e.g. in Magnetic Resonance Imaging (MRI), radiation exposure, e.g. in Computed Tomography (CT), battery drain, and bandwidth requirements. While the Nyquist theorem is traditionally used to provide theoretical bounds on the sampling rate, in recent years signal reconstruction from sub-Nyquist sampled data has been achieved through a framework called Compressive Sensing (CS). First proposed by Donoho (2006), and later applied for MRI by Lustig et al. (2007), CS leverages structural signal priors, specifically sparsity under some known transform. By taking compressive measurements followed by iterative optimization of a linear system under said sparsity prior, reconstruction of the original signal is possible while sampling at sub-Nyquist rates. Researchers have employed CS with great success in a wide variety of applications, such as radar (Baraniuk & Steeghs, 2007; Ender, 2010), seismic surveying (Herrmann et al., 2012), spectroscopy (Sanders et al., 2012), and medical imaging (Han et al., 2016; Lai et al., 2016). However, both the need to know the sparsifying basis of the data, and the iterative nature of the reconstruction algorithms, still hamper practical applicability of CS in many situations. These limitations can be overcome by the use of deep learning reconstruction models that make the sparsity assumption implicit, and facilitate non-iterative inference once trained. Moreover, the (typically random) nature of the measurement matrix in CS does, despite adhering to the given assumptions, not necessarily result in an optimal measurement given the underlying data statistics and the downstream system task. This has recently been tackled by algorithms that learn the sampling scheme from a data distribution. 
In general, these data-driven sampling algorithms can be divided into two categories: algorithms that learn sampling schemes which are fixed once learned (Huijben et al., 2020a;b;c; Ravishankar & Bresler, 2011; Sanchez et al., 2020; Bahadir et al., 2019; Bahadir et al., 2020; Weiss et al., 2019), and algorithms that learn to actively sample (Ji et al., 2008; Zhang et al., 2019; Jin et al., 2019; Pineda et al., 2020; Bakker et al., 2020), selecting new samples based on sequentially acquired information. The former type of algorithm learns a sampling scheme that - on average - selects informative samples for all instances originating from the training distribution. However, when this distribution is multi-modal, using one globally optimized sampling scheme can easily be sub-optimal at the instance level. Active acquisition algorithms deal with such shifts in underlying data statistics by conditioning sampling behavior on previously acquired information from the instance (e.g. the image to be sampled). This results in a sampling sequence that varies across test instances, i.e. sampling is adapted to the new data. This adaptation, as a result of conditioning, promises lower achievable sampling rates, or better downstream task performance for the same rate, compared to sampling schemes that operate identically on all data. In this work, we extend the Deep Probabilistic Subsampling (DPS) framework (Huijben et al., 2020a) to an active acquisition framework by making the sampling procedure iterative and conditional on the samples already acquired, see Fig. 1. We refer to our method as Active Deep Probabilistic Subsampling (A-DPS). We show how A-DPS clearly exploits the ten different modalities (i.e. the digits) present in the MNIST dataset to adopt instance-adaptive sampling sequences. Moreover, we demonstrate both on MNIST (LeCun et al., 1998) and the real-world fastMRI knee dataset (Zbontar et al., 2018) that A-DPS outperforms other state-of-the-art models for learned sub-Nyquist sampling. We make all code publicly available upon publication, in order to facilitate benchmarking against all provided baselines and A-DPS in future research. 2 RELATED WORK Recently, several techniques for learning a fixed sampling pattern have been proposed, especially in the field of MR imaging, in which Ravishankar & Bresler (2011) were among the first. In this work, the authors make use of non-overlapping cells in k-space, and move samples between these cells. During training, Ravishankar & Bresler (2011) alternate between reconstruction and relocation of sampling positions. After a reconstruction step they sort the cells in terms of reconstruction error and an infinite-p norm. Selected samples from lower-scoring cells are relocated to higher-scoring cells in a greedy fashion. Sanchez et al. (2020) also propose a greedy approach, in which samples are not relocated between cells, but greedily chosen to optimize a reconstruction loss on a batch of examples. Neither type of greedy optimization, however, allows for joint learning of sampling together with a downstream reconstruction/task model, as the reconstruction has to either be parameter-free or pretrained to work well with a variety of sampling schemes. Bahadir et al. (2019) on the other hand propose to learn the sampling pattern by thresholding pixel-based i.i.d. samples drawn from a uniform distribution, dubbed Learning-based Optimization of the Under-sampling PattErn (LOUPE).
The sample rate of LOUPE is indirectly controlled by promoting sparsity through the use of an ℓ1 penalty on the thresholds. One of the first active sampling schemes was proposed by Ji et al. (2008), who leverage CS reconstruction techniques that also give a measure of uncertainty of the reconstruction using Bayesian modeling. Ji et al. (2008) leveraged this uncertainty in the reconstruction to adaptively select the next measurement that will reduce this uncertainty by the largest amount. However, this method - and other similar works from (Carson et al., 2012; Li et al., 2013) - rely on linearly combined measurements, rather than discrete sampling, with which we concern ourselves here. In the field of MRI, Zhang et al. (2019) propose an active acquisition scheme by leveraging a reconstruction and adversarial neural network. Whereas the reconstruction network is trained to reconstruct MR images from the subsampled Fourier space (k-space), the adversarial network is trained to distinguish between already sampled, and omitted lines in this space. The k-space line that is most believed to be ‘fake’ (i.e. filled in by the reconstruction network) by the adversarial network, is sampled next. However, this framework only works for undersampled Fourier to image reconstruction tasks, as the discriminator requires mappings of the image in k-space. Jin et al. (2019) put forth an active acquisition scheme for MRI by leveraging reinforcement learning (RL). Two neural networks, one for sampling and one for reconstruction, are trained jointly using a Monte-Carlo tree search, resulting in a sampling policy that is dependent on the current reconstruction of the image. Concurrently with our work, both Pineda et al. (2020) and Bakker et al. (2020) proposed RL-based active acquisition techniques. Pineda et al. (2020) leverages a Double Deep Q-Network. The model is trained using a modified ε-greedy policy, in which the best action is taken with probability 1 − ε, and an exploratory action is taken with probability ε. Bakker et al. (2020) compare greedy with non-greedy training, finding that the greedy method leads to a higher degree of adaptability, especially for tasks with a long horizon (i.e. more samples to be taken). Both of the frameworks proposed by Pineda et al. (2020) and Bakker et al. (2020) make use of a pretrained reconstruction network, which differs from the proposed A-DPS method that enables joint training of both the reconstruction (task) network and sampling network. Even though subsampling is an extreme form of data compression, we differentiate from typical data compression architectures like deep encoder-decoder structures (Theis et al., 2017; Ballé et al., 2017), as these methods do not reduce data rates at the measurement stage. The feedback recurrent autoencoder proposed by Yang et al. (2020) is however related to A-DPS through its use of a recurrent context. But whereas Yang et al. (2020) learn a recurrent context to inform the encoder stage of the network, A-DPS uses this to inform the sampling pattern. 3 METHOD 3.1 GENERAL FRAMEWORK Given a prediction task s, we are interested in learning to predict an optimal subsampling scheme A ∈ {0, 1}M×N (with M ≪ N) on an input signal x ∈ RN, resulting in a measurement ỹ ∈ RM: ỹ = Ax. (1) Each row in A is constrained to have an ℓ0-norm of 1, while each column in A is constrained to have an ℓ0-norm of either 0 or 1, i.e. each of the N candidate samples is selected at most once. In the rest of this paper we will index these candidate samples with n ∈ {1,
. . . , N}, and the selected samples with m ∈ {1, . . . , M}. The percentage of selected samples from the candidate samples is called the sampling ratio r = M/N · 100%. We also introduce a non-compressed form of the measurement ỹ, called y ∈ RN, that contains N − M zeros, and M non-zeros at the sampled indices specified by A, i.e. the masked input. This way, the location of samples from x is preserved, which is especially useful when A changes during training. To acquire y from x, one seeks a subsampling mask d that can be applied on x via: y = d ⊙ x = AᵀAx, (2) where ⊙ denotes element-wise multiplication. From the resulting measurement y we then aim at predicting the downstream task s through: ŝ = fθ(y), (3) where fθ(.) is a function that is differentiable with respect to its input and parameters θ, e.g. a neural network. Normally, optimization of the task model fθ(.) is achieved through backpropagation of some loss function L(s, ŝ). However, calculating gradients on the sampling matrix is blocked by its combinatorial nature, inhibiting joint training of the task with the sampling operation. The DPS framework provides a solution to this problem, on which we will elaborate in the next section. 3.2 DPS: DEEP PROBABILISTIC SUBSAMPLING To enable joint training of the sampling operation with the downstream task model, Huijben et al. (2020a) introduce DPS. Rather than optimizing A directly, they propose to optimize a generative sampling model P(A|φ), where φ are learned unnormalized logits of (possibly multiple) categorical distribution(s). Each distribution expresses the probabilities for sampling any of the elements xn from x through sampling matrix A. More specifically, φm,n is the log-probability for setting am,n = 1, and thus sampling xn as the m-th sample. To generate a sampling pattern from these unnormalized logits, i.e. implementation of this conditional model, the Gumbel-max trick is leveraged (Gumbel, 1954). In the Gumbel-max trick the unnormalized logits are perturbed with i.i.d. Gumbel noise samples em,n ∼ Gumbel(0, 1). By selecting the maximum of this perturbation, a realization of the sampling mask can be found using: Am,: = one-hotN{argmaxn{wm−1,n + φm,n + em,n}}, (4) where Am,: denotes the m-th row of A and one-hotN creates a one-hot vector of length N, with the one at the index specified by the argmax operator. Moreover, the cumulative mask wm−1,n ∈ {−∞, 0} masks previously selected samples by adding minus infinity to those logits, thereby ensuring sampling without replacement. During backpropagation, gradients are computed by relaxing this sampling procedure using the Gumbel-softmax trick (Jang et al., 2016; Maddison et al., 2017), resulting in: ∇φm Am,: := ∇φm Eem[softmaxτ{wm−1,n + φm,n + em,n}], (5) where τ denotes the temperature parameter of the softmax operator. Setting τ > 0 results in a smoothed sampling matrix A (i.e. elements can have values between 0 and 1 as well), allowing gradients to distribute over multiple logits during training. In the limit of τ → 0 the softmax operator approaches the one-hot argmax function of equation (4). Although this approach – also known as straight-through Gumbel-softmax – leads to biased gradients, it has been shown to work well in practice, and Huijben et al. (2020a) keep τ at a fixed value during training. Huijben et al. (2020a) propose two regimes of DPS. First, Top-1 sampling, an expressive form of DPS where each of the M selected samples is separately conditioned on all N candidate samples, resulting in M × N trainable logits φm,n.
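For readers who prefer code, the Gumbel-max sampling of Eq. (4) and its straight-through Gumbel-softmax relaxation of Eq. (5) can be condensed into a short PyTorch sketch. This is our own illustrative paraphrase, not the authors' released implementation; the function and variable names are ours, and the default temperature of 2 simply mirrors the value reported in the training details later on.

```python
import torch
import torch.nn.functional as F

def dps_sample(phi, tau=2.0):
    # phi: (M, N) unnormalized logits; returns a sampling matrix A of shape (M, N)
    # whose rows are one-hot in the forward pass (straight-through Gumbel-softmax).
    M, N = phi.shape
    w = torch.zeros(N)                                 # cumulative mask w_{m-1,n}
    rows = []
    for m in range(M):
        e = -torch.log(-torch.log(torch.rand(N)))      # Gumbel(0, 1) noise e_{m,n}
        scores = w + phi[m] + e
        soft = torch.softmax(scores / tau, dim=0)      # relaxed row, Eq. (5)
        hard = F.one_hot(scores.argmax(), N).float()   # one-hot row, Eq. (4)
        rows.append(hard + soft - soft.detach())       # hard forward, soft backward
        w = torch.where(hard.bool(), torch.full_like(w, float("-inf")), w)
    return torch.stack(rows)
```

A call such as A = dps_sample(torch.randn(26, 208, requires_grad=True)) then yields a sampling matrix whose rows are one-hot in the forward pass, while gradients flow back to the logits through the tempered softmax.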
Second, Top-M sampling (called Top-K in their paper), a constrained form where all M samples together are conditioned on all N candidate samples, i.e. the logits φn are shared between the M rows of A, resulting in only N trainable logits. While Top-1 sampling is more expressive, Huijben et al. (2020a) noticed slightly better results for the Top-M regime, possibly thanks to the smaller number of trainable logits, therefore facilitating optimization. For scalability reasons, we thus choose to continue with Top-M sampling in this work and refer to this regime as DPS in the rest of this paper. We refer the reader to Huijben et al. (2020a) for more details regarding DPS. 3.3 A-DPS: ACTIVE DEEP PROBABILISTIC SUBSAMPLING We have seen how DPS enables the learning of a sampling scheme that selects M out of N samples. However, these samples are selected simultaneously. A-DPS selects its samples in an iterative fashion, separating the logits into I acquisition steps, i.e. φi with i ∈ {1, 2, . . . , I} and I = M. Active acquisition is then achieved by introducing dependency between samples, i.e. the sampling distribution at acquisition step i should depend on the information acquired in previous acquisition steps. To that end, we introduce a context vector ci that accumulates information from all previous time steps. We then condition the sampling distribution on this context by learning a transformation φ = gκ(c), where gκ(.) is a function that is differentiable with respect to its input and parameters κ. Thus, instead of optimizing the parameters directly (as DPS does), we optimize gκ(c), which we will refer to as the sampling model. The question then arises how to best generate this context from previous samples. Here, we follow the analysis-by-synthesis principle, and let the analysis (the sampling model) depend on the synthesis (the task model). This way, the task model can inform the sampling model what information it needs to achieve its assigned task. The iterative analysis-by-synthesis scheme of A-DPS is formalized as follows: φi = gκ(ci−1), (6) ŝi, ci = fθ(yi, ci−1) = fθ(dφi ⊙ xi, ci−1). (7) We hypothesize that it would be beneficial to add memory to the context vector, so that it does not contain only information about the previous state of the task model fθ(.) but can contain information from even earlier acquisition steps. To this end, we propose to leverage a recurrent neural network structure in the form of a Long Short-Term Memory (LSTM) cell (Hochreiter & Schmidhuber, 1997). This LSTM is integrated into the neural networks as a separate layer. We visualize the architecture of the A-DPS framework in Fig. 1 and discuss the computational complexity of A-DPS in Appendix A. 4 EXPERIMENTS To show the applicability of A-DPS on both classification as well as reconstruction tasks, we evaluate its performance in two experiments. First, we will compare A-DPS with DPS at different subsampling ratios on an MNIST classification example in Section 4.1. Second, we will compare A-DPS with contemporary CS and deep learning methods on an MRI example in Section 4.2, leveraging the fastMRI knee dataset (Zbontar et al., 2018). 4.1 MNIST Experiment setup Classification performance at different sampling rates was tested on the MNIST database (LeCun et al., 1998), consisting of 70,000 grayscale images of 28 × 28 pixels of handwritten digits between 0 and 9. We keep the original data split of 60,000 training and 10,000 testing examples.
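Read as pseudo-code, Eqs. (6)-(7) describe an unrolled loop in which the task model both predicts and emits the context that steers the next draw. The sketch below is a minimal PyTorch rendering under our own assumptions: a GRU cell stands in for fθ(.) and a single linear layer for gκ(.), which differs from the MLP-plus-LSTM architectures actually used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADPSLoop(nn.Module):
    # Illustrative sketch of the A-DPS acquisition loop of Eqs. (6)-(7).
    # The GRU cell and layer sizes are stand-ins, not the published architecture.
    def __init__(self, n_candidates, n_classes, hidden=128):
        super().__init__()
        self.task = nn.GRUCell(n_candidates, hidden)     # stands in for f_theta
        self.sampler = nn.Linear(hidden, n_candidates)   # stands in for g_kappa
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, n_steps, tau=2.0):
        B, N = x.shape
        c = x.new_zeros(B, self.task.hidden_size)        # context vector c^0
        hard_mask = x.new_zeros(B, N)                    # candidates already acquired
        st_mask = x.new_zeros(B, N)                      # straight-through mask
        for _ in range(n_steps):
            phi = self.sampler(c)                        # Eq. (6): phi^i = g_kappa(c^{i-1})
            phi = phi.masked_fill(hard_mask.bool(), float("-inf"))   # no replacement
            g = -torch.log(-torch.log(torch.rand_like(phi)))         # Gumbel noise
            soft = torch.softmax((phi + g) / tau, dim=-1)
            hard = F.one_hot(soft.argmax(-1), N).float()
            hard_mask = hard_mask + hard
            st_mask = st_mask + hard + soft - soft.detach()
            c = self.task(st_mask * x, c)                # Eq. (7): y^i = d ⊙ x, update context
        return self.head(c), c
```

Each pass through the loop draws one additional candidate, so calling forward with n_steps = M reproduces the M-sample acquisition described above.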
We train both DPS top-M and A-DPS to take partial measurements in the pixel domain at different sampling rates. Referring back to Fig. 1 and equations (6) and (7), DPS top-M sampling only consists of the sampling and task model (fθ(.)) parts. All M samples are selected at the same time and used once by fθ(.) to predict which digit the network is looking at. In the case of A-DPS however, only 1 sample is taken at a time and used as input for fθ(.). Here, fθ(.) also creates a context that is used by the sampling network gκ(.) to select the next sample. A-DPS iterates through this loop M times in order to select all the samples. We keep fθ(.) the same for both DPS and A-DPS, so that the last iteration of A-DPS is similar to DPS top-M (i.e. M samples are selected and fed through fθ(.)). Task model In the classification network fθ(.), all 784 (28 × 28) zero-masked samples are used as input for 4 fully connected layers. The fully connected layers have 784, 256, 128, and 128 nodes, respectively. Moreover, they are all activated by leaky ReLU activation functions with a negative slope of 0.2. The first three layers also have a dropout of 30%. The output of the last fully connected layer is then fed into an LSTM with a hidden size of 128. The output of the LSTM is both used as the context for the sampling network, as well as the input for one final fully connected layer. This final fully connected layer maps the output of the LSTM into the 10 class labels, and is therefore followed by a softmax activation function. In the case of A-DPS, the output of the LSTM is also used as the input for the sampling network gκ(.). The sampling network consists of two linear layers with output sizes of 256 and 784, respectively. Moreover, after the first layer a leaky ReLU activation function is used with a negative slope of 0.2, and a dropout of 30% is applied. Training details Both sampling strategies were trained to minimize the categorical cross-entropy loss. The temperature parameter was fixed to 2. We employ SGD with the Adam solver (lr = 2e−4, β1 = 0.9, β2 = 0.999, and ε = 1e−7) to minimize the loss function. Training was performed on batches of 256 examples for 50 epochs. Results The resulting accuracy on the test set is shown in Fig. 2a. A-DPS outperforms DPS especially when the actual number of samples taken is very low. It is hypothesized that, for very low data rates, it is especially important to select those candidate samples that carry a lot of information based on the previous samples. Two examples of the selected sampling masks at r = 4% are displayed in Fig. 2b. Here, it is shown how DPS selects all samples at once, while A-DPS selects them in an iterative fashion, resulting in different sampling patterns for the two examples. To study the effect of the LSTM in this set-up, we perform an ablation study by replacing it with a fully connected layer, resulting in a direct projection of the context vector from the previous state without any further memory. The results of this ablation study are shown in Fig. 3a. Here we can see that for the stricter sampling regimes below 3% the use of an LSTM leads to a higher accuracy. To analyze the sampling patterns created by A-DPS, we create a t-SNE plot (Van Der Maaten & Hinton, 2008), which maps the created sampling pattern (for r = 4%) per image in the test set to one dot in a 2D space. In this 2D space t-SNE aims to preserve spatial relationships from the higher dimension, i.e.
similar high-dimensional vectors get mapped close together, while dissimilar ones are mapped further apart. The resulting plot is shown in Fig. 3b, where each dot is colored with the ground truth label (digit) of the corresponding image. The clustering in this figure indicates how similar selected sampling patterns are. For example, the sampling patterns for digit zero tend to be dissimilar from all the other sampling patterns, while the digits four, six, and nine are sampled more similarly using A-DPS. To get more insight into the inner workings of A-DPS, we perform a similar t-SNE analysis on the context vector that A-DPS constantly updates after acquisition of a new sample. The result is shown in Fig. 4. Here we show the same scatter plot with two different colorings, namely the current acquisition step and the ground truth label (digit). It can be seen how the context vectors are similar for the first few acquisition steps, but as information comes in the context vectors accumulate useful information and branch off into different regions dependent on the ground truth label. 4.2 MRI Experiment setup To show the applicability of A-DPS, we demonstrate its performance on line-based MRI. We make use of the NYU fastMRI database of knee MRI volumes (Zbontar et al., 2018). Only the single-coil measurements were selected, from which the outer slices were removed. The resulting data was split into 8,000 training, 2,000 validation, and 3,000 testing MRI slices. All slices were cropped to the central 208 × 208 pixels and normalized between 0 and 1. The subsampling operation on one of these MRI slices is then performed in k-space (Fourier space): Y = |FH(D ⊙ FX)|, (8) where |.| is the magnitude operator. Moreover, X ∈ RN×N is the fully sampled ground truth image and Y ∈ RN×N is the subsampled image, both in the pixel domain. In this case N is equal to 208. Furthermore, F and FH denote the forward and inverse 2D Fourier transform, respectively. D ∈ {0, 1}N×N denotes the sampling mask in k-space. Normally Y would be complex, due to the asymmetrical nature of MRI measurements and the incomplete subsampling mask. Here, we choose to take the magnitude of Y to simplify reconstruction. We hypothesize that doing so does not significantly change the problem, as the imaginary part of fully sampled images in the NYU fastMRI dataset is very small compared to the real part. Task model To reconstruct an estimate of the original image X̂ from the partial measurement Y, a deep unfolded proximal gradient method is used (Mardani et al., 2018), in which K iterations of a proximal gradient method are unfolded as a feed-forward neural network following: X̂(k+1) = Pζ(k){X̂(k) − αψ(k)(|FH(D ⊙ FX̂(k))| − Y)}, (9) where Pζ(k)(.) is a trainable image-to-image proximal mapping and αψ(k) is the step size, parameterized by ζ and ψ, respectively. We implement this proximal gradient method for K = 3 steps, with the trainable step size αψ(k) implemented as a 3 × 3 convolutional layer. Each proximal mapping is implemented as a series of 4 convolutions with 16, 16, 16, and 1 feature(s) each and a kernel size of 3 × 3. All convolutions but the last are followed by ReLU activation functions. We will compare A-DPS to several relevant sampling baselines, namely, random uniform, low-pass, variable density (VDS), greedy mask selection (Sanchez et al., 2020), LOUPE (Bahadir et al., 2019; Bahadir et al., 2020), and DPS.
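The measurement model of Eq. (8) is a retrospective masking of the Fourier transform of the image. A minimal PyTorch sketch is given below; whether the binary mask D is applied in a centered (fftshift-ed) k-space is our assumption, since the convention is not stated in the text, and the helper for building a line mask is purely hypothetical.

```python
import torch
import torch.fft as fft

def kspace_measurement(X, D):
    # Sketch of Eq. (8): Y = |F^H (D ⊙ F X)|, with D a binary k-space mask.
    # X: (N, N) real-valued image, D: (N, N) mask of ones along selected lines.
    k = fft.fftshift(fft.fft2(X))            # F X (centered k-space, assumed)
    Y = fft.ifft2(fft.ifftshift(D * k))      # F^H (D ⊙ F X)
    return Y.abs()                           # magnitude image, as in the paper

def row_mask(n=208, rows=range(101, 107)):
    # Hypothetical helper: keep only the listed k-space rows (phase-encoding lines).
    D = torch.zeros(n, n)
    D[list(rows), :] = 1.0
    return D
```

Composing kspace_measurement with a mask from row_mask gives the subsampled image Y that the unfolded proximal gradient network of Eq. (9) then tries to invert.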
Under a random uniform regime, all N lines are equally likely to be sampled, while under a low-pass regime the M lines closest to the DC frequency will be selected. VDS, on the other hand, is a heuristic regime that employs a probability density from which the desired number of samples is drawn. Following (Lustig et al., 2007), we here use a polynomial probability density function with a decay factor of 6. For the greedy mask selection we follow the approach by Sanchez et al. (2020) and first optimize the sampling mask using the NESTA solver (Becker et al., 2011). After this, we fix the sampling mask and train our proximal gradient network. Results for both reconstruction algorithms are reported. To generate a sampling mask D using A-DPS we use a sampling network gκ(.). This network takes two inputs. Firstly, the output of the proximal network, i.e. the reconstructed image at that iteration. This image is analyzed using 3 convolutional layers with kernel sizes of 3 × 3 followed by ReLU activation functions. The output features are of sizes 16, 16, and 32, respectively. The final feature map is aggregated into a feature vector using global average pooling. Next to this feature vector, the indices of the selected lines at the previous iteration(s) are also used as input for the sampling network, encoded as an M-hot vector of dimension 208. Both the feature and selected-indices vectors are concatenated and used as input for an LSTM cell, with an output dimension of 208. This is followed by two fully connected layers with 208 neurons each, having a ReLU activation after the first layer. Training details To promote the reconstruction of visually plausible images, we leverage both a Mean Squared Error (MSE) and an adversarial loss (Ledig et al., 2016). To that end, we introduce a discriminator network that is trained to distinguish between real and reconstructed MR images. The discriminator is implemented using three convolutional layers with kernel sizes of 3 × 3, stride 2, and 64 feature maps, each with Leaky ReLU activations. After the last convolutional layer the feature maps are aggregated into a feature vector using global average pooling, with a dropout rate of 40%, which is mapped to a single output probability using one fully connected layer followed by a sigmoid activation function. Next to the MSE loss and adversarial loss, we add a third loss term that penalizes the MSE loss between the discriminator features of real and generated images. The total loss function is a weighted summation of these three losses, with weights 1, 5e−6, and 1e−7, respectively. All sampling mask selection strategies were then trained using SGD on batches of 4 images for a total of 10 epochs. We again employ the Adam solver (lr = 2e−4, β1 = 0.9, β2 = 0.999, and ε = 1e−7) to minimize the loss function, and set the temperature parameter to 2. We choose M = 26, which results in a sampling ratio of r = 12.5%, or an acceleration factor of 8. Results We score the different strategies based on 3 metrics: the normalized mean square error (NMSE), the peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM) (Wang et al., 2004). Averaged results over 5 runs for an acceleration factor of 8 are shown in Table 1, while those for an acceleration factor of 16 are shown in Table 2. We test for the statistical significance of the gains made by A-DPS over DPS in Appendix B. An example of an A-DPS reconstruction is shown in Fig.
5, while a comprehensive overview of all baselines for this example can be found in Appendices C and D. To analyse the workings of A-DPS in this MRI setting, we first plot the relative occurrence of different line indices over the test set. This is shown in Fig. 6a. We can see that A-DPS always selects a band of lines around 0, with a lower occurrence of high-frequency lines. We also employ t-SNE on the context vector that A-DPS updates every acquisition step. The result of this is shown in Fig. 6b. It can be seen that until acquisition step 15 the context vectors are very similar for the images per acquisition step, while after acquisition step 15 the context vectors start fanning out. It is hypothesized that at the start of the sampling procedure not a lot of actionable information is available to the system, but this increases as more samples are taken over time. 5 CONCLUSION We proposed a generalization of DPS, which enables active acquisition, called A-DPS. We demonstrated its applicability on both an MNIST classification task as well as an MRI reconstruction task. Moreover, we found that the adaptive nature of A-DPS improves downstream task performance over other sampling pattern selection methods. We find that A-DPS uses qualitatively differing sampling strategies depending on the context. On a critical note, the black-box nature of A-DPS comes with the traditional machine learning challenges of out-of-distribution generalization and overfitting. This means that in a practical application, the sub-sampling regime could obfuscate the information required to recognize failure cases. Future work includes exploration of how to improve conditioning of the sampling scheme on earlier acquired information and meta-information (such as resolution, sampling ratio, and weighting). Potential future applications include 3D and dynamic MRI, CT, ultrasound, radar, video, and MIMO systems. A COMPUTATIONAL COMPLEXITY One of the drawbacks of A-DPS compared to learned fixed sampling schemes is its higher computational complexity. The main source of this complexity is the unrolling of iterations, leading to a computational complexity of O(I) = O(M/ρ). Although we set ρ equal to 1 in all our experiments, one can in fact seamlessly interpolate between A-DPS and DPS by choosing 1 ≤ ρ ≤ M. This constitutes a trade-off between computational complexity and adaptation rate. We leave further exploration of this trade-off to future work. We can also express computational complexity in terms of run-time on a machine, in our case a GeForce GTX 1080 Ti. A comparison of DPS and A-DPS in terms of training time per epoch can be seen in Fig. 7. We can see that the training time for A-DPS increases for higher sampling ratios, where it needs to unroll through more iterations. By combining the results from Fig. 2a and Fig. 7 one can make a trade-off between run-time and accuracy: A-DPS achieves higher accuracy for stricter sampling regimes, while at the same time not increasing run-times by much. For the MRI experiment, the training times per epoch are 2 and 52 minutes for DPS and A-DPS, respectively. Inference is however fast: A-DPS only requires ∼13 ms of processing time to determine the next-to-acquire k-space line and reconstruct an image after each step. This is well below the shortest reported echo time (TE) for this MRI acquisition, being 27 ms (Zbontar et al., 2018).
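The interpolation between A-DPS and DPS mentioned in Appendix A amounts to choosing how many lines are drawn per unrolled step. A tiny sketch (plain Python; the helper name is ours) makes the resulting schedule explicit:

```python
import math

def acquisition_schedule(M, rho):
    # Draw rho lines per acquisition step, so the model is unrolled
    # I = ceil(M / rho) times. rho = 1 recovers A-DPS, rho = M one-shot DPS.
    I = math.ceil(M / rho)
    return [min(rho, M - i * rho) for i in range(I)]

# acquisition_schedule(26, 1)  -> 26 steps of one line each (A-DPS)
# acquisition_schedule(26, 8)  -> [8, 8, 8, 2]
# acquisition_schedule(26, 26) -> [26] (DPS)
```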
B STATISTICAL TESTS ON MRI To analyse whether the gains made by A-DPS over DPS are statistically significant, we perform two-sided paired Student’s t-tests, the results of which are shown in Table 3 and Table 4. We perform two different types of tests. Firstly, we look at the average results for each run after training on the hold-out test set. This results in a t-test with n = 5 that conveys how reliably A-DPS outperforms DPS, given a new random initialization of the trainable network weights, and different stochastic optimization behavior. These results are shown in Table 3. Secondly, we perform a t-test over the results on each individual image (averaged over the 5 runs) of the hold-out test set, resulting in n = 3000. This test indicates whether A-DPS’ performance is significantly higher than that of DPS, given a new test image. These results are shown in Table 4. It can be seen how in all cases and for all metrics p < 0.05, indicating that our findings are statistically significant. C MRI RECONSTRUCTION EXAMPLES FOR ACCELERATION FACTOR 8 [Figure: sampling mask, reconstruction, and error map for the ground truth and for each method (random uniform, low-pass, variable density, greedy mask selection, LOUPE, DPS, A-DPS).] D MRI RECONSTRUCTION EXAMPLES FOR ACCELERATION FACTOR 16 [Figure: same layout as Appendix C, for an acceleration factor of 16.]
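The paired test of Appendix B can be reproduced with a few lines of SciPy. The per-image scores below are placeholders only (the real measurements are not listed in the text); the call itself is the standard two-sided paired t-test.

```python
import numpy as np
from scipy import stats

# Hypothetical per-image SSIM values for the n = 3000 hold-out slices,
# each averaged over the 5 runs; replace with the actual measurements.
rng = np.random.default_rng(0)
ssim_dps = 0.55 + 0.05 * rng.standard_normal(3000)
ssim_adps = ssim_dps + 0.01 + 0.005 * rng.standard_normal(3000)

t_stat, p_value = stats.ttest_rel(ssim_adps, ssim_dps)   # two-sided paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")             # significant if p < 0.05
```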
1. What is the focus and contribution of the paper on compressed sensing? 2. What are the strengths of the proposed approach, particularly in its adaptive sampling strategy? 3. What are the weaknesses of the paper, especially regarding its comparisons with other works? 4. Do you have any questions regarding the experiments or results presented in the paper? 5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review SUMMARY: The paper at hand deals with compressed sensing (CS) and introduces an extension to deep probabilistic subsampling (DPS) called active deep probabilistic subsampling (A-DPS): instead of learning a sampling pattern that is equal for each element of the dataset, A-DPS adaptively selects entries (of each element) based on the information acquired so far. It is shown that this active sampling increases performance for different tasks: a toy example that aims to demonstrate the benefits of active sampling, a classification task (from subsampled inputs) on the MNIST dataset, and a reconstruction task on the NYU fastMRI database of knee MRI volumes. STRENGTHS: The paper is very clearly written and very comprehensible. Furthermore, it is very detailed about the experimental setup. I also liked the description of the general framework, which thoroughly defines the used notation. The idea is well motivated and the approach of selecting samples depending on the previously selected ones makes intuitive sense. The results of the experiments on MNIST and the NYU fastMRI data are promising. A plethora of (non-active) subsampling schemes are benchmarked as well. WEAKNESSES: The greatest weakness of this paper is the missing comparison to other active sub-sampling schemes (Zhang et al., 2019; Jin et al., 2019). It would be nice to see whether the proposed method produces better results than the existing methods. I found the toy example rather contrived. It is not really easy to understand and does not, in my opinion, improve the quality of the paper. QUESTIONS: What happens when the MNIST sampling ratio in Figure 3a is further increased? Does A-DPS consistently outperform DPS in low sampling ratio regimes? DECISION: Overall, the paper presents an interesting and novel approach. However, it remains an open question whether the proposed A-DPS scheme performs better than already existing active subsampling schemes. Besides this, the experimental evaluation is solid. I lean towards acceptance. UPDATE AFTER REBUTTAL: I thank the authors for their responses and appreciate the inclusion of some of the requested changes in the paper. However, the paper still misses the comparison to other adaptive methods, which is the paper's greatest weakness. Therefore, I decided to keep my score at 6. MINOR REMARKS: The caption of Table 1 could use some more spacing.
ICLR
Title Dynamic Feature Selection for Efficient and Interpretable Human Activity Recognition Abstract In many machine learning tasks, input features with varying degrees of predictive capability are usually acquired at some cost. For example, in human activity recognition (HAR) and mobile health (mHealth) applications, monitoring performance should be achieved with a low cost to gather different sensory features, as maintaining sensors incur monetary, computation, and energy cost. We propose an adaptive feature selection method that dynamically selects features for prediction at any given time point. We formulate this problem as an `0 minimization problem across time, and cast the combinatorial optimization problem into a stochastic optimization formulation. We then utilize a differentiable relaxation to make the problem amenable to gradient-based optimization. Our evaluations on four activity recognition datasets show that our method achieves a favorable trade-off between performance and the number of features used. Moreover, the dynamically selected features of our approach are shown to be interpretable and associated with the actual activity types. 1 INTRODUCTION Acquiring predictive features is critical for building trustworthy machine learning systems, but this often comes at a daunting cost. Such a cost can be in the form of energy needed to maintain an ambient sensor (Ardywibowo et al., 2019; Yang et al., 2020), time needed to complete an experiment (Kiefer, 1959), or manpower required to monitor a hospital patient (Pierskalla & Brailer, 1994). Therefore, it becomes important not only to maintain good performance in the specified task, but also a low cost to gather these features. Indeed, existing Human Activity Recognition (HAR) methods typically use a fixed set of sensors, potentially collecting redundant features to discriminate contexts (Shen & Varshney, 2013; Aziz et al., 2016; Ertuǧrul & Kaya, 2017; Cheng et al., 2018). Classic feature selection methods such as the LASSO and its variants can address the performance-cost trade-off by optimizing an objective penalized by a term that helps promote feature sparsity (Tibshirani, 1996; Friedman et al., 2010, 2008; Zou & Hastie, 2005). Such feature selection formulations are often static, that is, a fixed set of features are selected a priori. However, different features may offer different predictive power under different contexts. For example, a health worker may not need to monitor a recovering patient as frequently compared to a patient with the declining condition; an experiment performed twice may be redundant; or a smartphone sensor may be predictive when the user is walking but not when the user is in a car. By adaptively selecting which sensor(s) to observe at any given time point, one can further reduce the inherent cost for prediction and achieve a better trade-off between cost and prediction accuracy. In addition to cost-efficiency, an adaptive feature selection formulation can also lead to more interpretable and trustworthy predictions. Specifically, the predictions made by the model are only based on the selected features, providing a clear relationship between input features and model predictions. 
Existing efforts on interpreting models are usually based on some post-analyses of the predictions, including the approaches in (1) visualizing higher level representations or reconstructions of inputs based on them (Li et al., 2016; Mahendran & Vedaldi, 2015), (2) evaluating the sensitivity of predictions to local perturbations of inputs or the input gradients (Selvaraju et al., 2017; Ribeiro et al., 2016), and (3) extracting parts of inputs as justifications for predictions (Lei et al., 2016). Another related but orthogonal direction is model compression of training sparse neural networks with the goal of memory and computational efficiency (Louizos et al., 2017; Tartaglione et al., 2018; Han et al., 2015). All these works require collecting all features first and provide post-hoc feature relevance justifications or network pruning. Recent efforts on dynamic feature selection adaptively assign features based on immediate statistics (Gordon et al., 2012; Bloom et al., 2013; Ardywibowo et al., 2019; Zappi et al., 2008), ignoring the information a feature may have on future predictions. Others treat feature selection as a Markov Decision Process (MDP) and use Reinforcement Learning (RL) to solve it (He & Eisner, 2012; Karayev et al., 2013; Kolamunna et al., 2016; Spaan & Lima, 2009; Satsangi et al., 2015; Yang et al., 2020). However, solving the RL objective is not straightforward. Besides being sensitive to hyperparameter settings in general, approximations such as state space discretization and greedy approximations of the combinatorial objective were used to make the RL problem tractable. To this end, we propose a dynamic feature selection method that can be easily integrated into existing deep architectures and trained from end to end, enabling task-driven dynamic feature selection. To achieve this, we define a feature selection module that dynamically selects which features to use at any given time point. We then formulate a sequential combinatorial optimization that minimizes the trade-off between the learning task performance and the number of features selected at each time point. To make this problem tractable, we cast this combinatorial optimization problem into a stochastic optimization formulation. We then adopt a differentiable relaxation of the discrete feature selection variables to make it amenable to stochastic gradient descent based optimization. It therefore can be plugged-in and jointly optimized with state-of-the-art neural networks, achieving task-driven feature selection over time. To show our method’s ability to adaptively select features while maintaining good performance, we evaluate it on four time-series activity recognition datasets: the UCI Human Activity Recognition (HAR) dataset (Anguita et al., 2013), the OPPORTUNITY dataset (Roggen et al., 2010), the ExtraSensory dataset (Vaizman et al., 2017), as well as the NTU-RGB-D dataset (Shahroudy et al., 2016). Several ablation studies and comparisons with other dynamic and static feature selection methods demonstrate the efficacy of our proposed method. Specifically, our dynamic feature selection is able to use as low as 0.28% of the sensor features while still maintaining good human activity monitoring accuracy. Moreover, our dynamically selected features are shown to be interpretable with direct correspondence with different contexts and activity types. 
2 METHODOLOGY 2.1 THE ℓ0-NORM MINIMIZATION PROBLEM Many regularization methods have been developed to solve simultaneous feature selection and model parameter estimation (Tibshirani, 1996; Zou & Hastie, 2005; Tibshirani, 1997; Sun et al., 2014; Simon et al., 2011). The ideal penalty for the purpose of feature selection is the ℓ0-norm of the model coefficients for all predictors. This norm is equivalent to the number of nonzero terms in all the model coefficients. Given a dataset D containing N independent and identically distributed (iid) input-output pairs {(x1, y1), . . . , (xN, yN)} with each xi containing P features, a hypothesis class of predictor functions f(·; θ), and a loss function L(ŷ, y) between prediction ŷ and true output y, the ℓ0-norm regularized optimization problem can be written as follows: min_θ (1/N) Σ_{i=1}^N L(f(x_i; θ), y_i) + λ‖θ‖_0, (1) where ‖θ‖_0 = Σ_{j=1}^P I[θ_j ≠ 0] penalizes the number of nonzero model coefficients. In the models that linearly transform the input features x_i, penalizing the weights relating to each feature in x_i enables sparse feature subset selection. However, such a selection is static, as it does not adaptively select features that are appropriate for a given context. Moreover, the optimization above is computationally prohibitive as it involves combinatorial optimization to select the subset of nonzero model coefficients corresponding to the input features. In the following, we formulate our adaptive dynamic feature selection problem when learning with multivariate time series. Coupled with training recurrent neural networks, this adaptive feature selection problem is transformed into a sequential context-dependent feature subset selection problem, to which we devise a stochastic relaxation to make the problem tractable. 2.2 DYNAMIC FEATURE SELECTION VIA SEQUENTIAL CONTEXT-DEPENDENT FEATURE SUBSET SELECTION Instead of finding a subset of nonzero model coefficients, an equivalent formulation can be derived by directly selecting the feature subset. Without loss of generality, let z be a binary vector that indicates whether each feature is selected or not. Then, the original ℓ0-norm optimization formulation can be equivalently written as follows: min_{θ,z} (1/N) Σ_{i=1}^N L(f(x_i ◦ z; θ), y_i) + λ‖z‖_0. (2) Compared to the original problem, the penalty on the number of selected features is through the ℓ0-norm of z. This formulation is more flexible, as z can be made dependent on corresponding input features, output labels, or any contextual information, allowing us to formulate our dynamic feature selection problem when learning with multivariate time series data. Specifically, let the input-output pairs (x_i, y_i) be a pair of time series data of length T_i. At each time t, our model predicts the output y_i^t, as well as the next feature set to select z_i^t. This optimization problem can be formulated as: min_{θ,z} (1/N) Σ_{i=1}^N Σ_{t=1}^{T_i} L(f(x_i^{0:t−1} ◦ z_i^{0:t−1}; θ), y_i^t) + λ Σ_{i=1}^N Σ_{t=1}^{T_i} ‖z_i^t‖_0. (3) Here, we are tasked to find a set of parameters θ and feature sets z_i^t for each sample i at each time point t to optimize the trade-off between model performance and the number of selected features. The model then uses the parameters and the previously observed features X_i^t := x_i^{0:t−1} ◦ z_i^{0:t−1} to infer the next output y_i^t. However, the above formulation remains intractable, as it involves combinatorial optimization to select the feature subsets at each time point, in addition to the joint optimization of the model parameters and variable selection.
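To make Eq. (3) concrete, the regularized objective for one sequence can be written as a few lines of PyTorch. This is a hedged sketch only: the interface of `model`, which should use only the past masked observations when predicting y_i^t, is our assumption and not the paper's exact signature.

```python
import torch
import torch.nn.functional as F

def sequential_l0_objective(model, x, y, z, lam=1.0):
    # Sketch of Eq. (3) for a single sequence. x: (T, P) features, y: (T,) labels,
    # z: (T, P) binary selections. The model is assumed to map the masked sequence
    # to per-step class logits, using only past masked inputs for step t.
    logits = model(x * z)                        # x_t o z_t fed to the predictor
    task = F.cross_entropy(logits, y, reduction="sum")
    return task + lam * z.sum()                  # sum_t ||z_t||_0 for binary z
```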
Naively, one may also need to solve a separate optimization problem to find z_i^t for each time point during the run time. In the following section, we derive a relaxation based on stochastic optimization parameterizing the z_i^t's to make the above problem tractable. 2.3 RELAXATION THROUGH STOCHASTIC OPTIMIZATION Instead of finding the exact feature subsets indexed by z_i^t that achieve the optimal regularized objective, one can treat these z_i^t's as binary random variables and seek to optimize the distribution π(z|φ) that generates these random variables. For the ease of exposition, we first focus on the relaxation of the non-adaptive formulation in (1) as follows: min_{θ,φ} E_{(x_i,y_i)∼D}[ E_{z∼π(z|φ)}[ L(f(x_i ◦ z; θ), y_i) + λ‖z‖_0 ] ]. (4) Note that the solution to this problem is equivalent to the original one, as the original combinatorial problem can be recovered by setting π(z|φ) = Bern(φ), a Bernoulli distribution parameterized by φ, and restricting φ ∈ {0, 1}. Using this relaxation, the regularization term can now be evaluated analytically: E_{z∼π(z|φ)}[‖z‖_0] = E_{z∼Bern(φ)}[‖z‖_0] = Σ_{j=1}^P π(z|φ)_j = Σ_{j=1}^P φ_j. (5) On the other hand, the outer expectation in (4) can be approximated using minibatches. Relaxation of binary random variables has been adopted in Louizos et al. (2017) for network architecture sparsification, and in Yamada et al. (2019); Balın et al. (2019) for static feature selection. Here, we extend the above relaxation for time series data, where unlike previous works, the binary random variables are parameterized locally and are context-dependent, and features are selected adaptively across time. We first note that our adaptive feature selection formulation in (3) allows each time point to have its own feature selection distribution π_i^t(z|φ) := π(z|X_i^{t−1}, φ) conditioned on the previously selected observed features X_i^{t−1} as defined above. Let π_i(z|φ) be the set of π_i^t(z|φ) for all t ∈ {1, . . . , T_i}. The stochastic relaxation of the adaptive feature selection formulation can be written as follows: min_{θ,φ} E_{(x_i,y_i)∼D}[ E_{z_i∼π_i(z|φ)}[ Σ_{t=1}^{T_i} L(f(X_i^{t−1}; θ), y_i^t) ] + λ Σ_{t=1}^{T_i} Σ_{j=1}^P π_i^t(z|φ)_j ]. (6) 2.4 MODEL PARAMETERIZATION AND DIFFERENTIABLE RELAXATION The difficulty in solving the above problem using gradient descent is that the discrete random variables z_i^t are not directly amenable to stochastic reparameterization techniques. An effective and simple-to-implement formulation that we adopt is the Gumbel-Softmax reparameterization (Jang et al., 2016; Maddison et al., 2016), which relaxes a discrete-valued random variable z parameterized by φ to a continuous random variable z̃. Firstly, we can parameterize π(z|X_i^{t−1}, φ) using a vector-valued function σ(X_i^{t−1}, φ) of the previous observations X_i^{t−1}, with φ now being the parameters of σ(·). The distribution can now be rewritten as π(z|X_i^{t−1}, φ) = Bern(σ(X_i^{t−1}, φ)). With this, the discrete-valued random variables z_i^t can be relaxed into continuous random variables z̃_i^t as follows: z̃_i^t = 1 / (1 + exp(−(log σ(X_i^{t−1}, φ) + L)/τ)). (7) Here, L = log u − log(1 − u) is a sample from a logistic distribution, where u ∼ Unif(0, 1), and τ is a temperature parameter. For low values of τ, z̃_i^t approaches a sample of a binary random variable, recovering the original discrete problem, while for high values, z̃_i^t will equal 1/2. With this, we are able to compute gradient estimates of z̃_i^t and approximate the gradient of z_i^t as ∇_{θ,φ} z_i^t ≈ ∇_{θ,φ} z̃_i^t.
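A literal reading of Eqs. (5)-(7) in code looks as follows. This is our illustrative sketch of the relaxation, not the authors' implementation; the clamping of u is a numerical-stability assumption.

```python
import torch

def relaxed_gates(select_probs, tau=0.5):
    # Sketch of Eq. (7): perturb log sigma(.) with Logistic(0, 1) noise and squash
    # with a tempered sigmoid to obtain continuous gates z~ in (0, 1).
    u = torch.rand_like(select_probs).clamp(1e-6, 1 - 1e-6)
    L = torch.log(u) - torch.log1p(-u)                 # Logistic(0, 1) sample
    return torch.sigmoid((torch.log(select_probs) + L) / tau)

def relaxed_objective(task_loss, select_probs, lam=1.0):
    # Sketch of Eq. (6): task loss plus lambda times the expected number of
    # selected features, i.e. the sum over time and features of pi_t(z|phi)_j (Eq. 5).
    return task_loss + lam * select_probs.sum()
```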
This enables us to backpropagate through the discrete random variables and train the selection parameters along with the model parameters jointly using stochastic gradient descent. Meanwhile, at test time, we sample binary random variables from the learned probabilities. 2.5 MODEL SPECIFICATION To complete our formulation, we specify the model architecture that we use. We have implemented our adaptive dynamic feature selection with a Gated Recurrent Unit (GRU) (Cho et al., 2014a), a type of Recurrent Neural Network (RNN) (Graves et al., 2013), as shown in Figure 1. Here, we have the previous observations X_i^{t−1} being summarized by the hidden state h_i^{t−1}. For adaptive feature selection, the selection distribution is made dependent on h_i^{t−1} using a sigmoid of its linear transformation by a weight matrix W as follows: σ(X_i^{t−1}, φ) = SIGMOID(W h_i^{t−1}), such that φ = {W}. We note that such a module can be easily integrated into many existing deep architectures and trained from end to end, allowing for task-driven feature selection. For example, the module can be applied to Recurrent Convolutional Neural Networks (RCNN) (Liang & Hu, 2015) to selectively determine which convolutional patches/channels to use, or to general feedforward networks to selectively deactivate certain neurons/channels to reduce computation. We have demonstrated this ability by applying it to an Independent RNN (Li et al., 2018) benchmarked on the NTU-RGB-D dataset (Shahroudy et al., 2016), as detailed in Appendix A.4. With the model specified, our method can be applied to existing human activity recognition datasets. Specifically, we are now able to train a prediction model and dynamic feature selection policy offline, and test it on a withheld testing set. The application of our model to online learning is the subject of future work. 3 RELATED WORK Existing HAR systems typically use a fixed set of sensors, potentially collecting redundant features for easily discriminated contexts. Methods that attempt to find a fixed or static feature set often rank feature sets using metrics such as Information Gain (Shen & Varshney, 2013), or relevancy ranking through a filtering strategy (Aziz et al., 2016; Ertuǧrul & Kaya, 2017; Cheng et al., 2018). However, static feature selection can potentially result in collecting redundant information for highly discriminable contexts. Work on dynamic feature selection can be divided into Reinforcement Learning (RL) based and non-RL approaches. Non-RL based approaches vary from assigning certain features to certain activities (Gordon et al., 2012), pre-defining feature subsets for prediction (Bloom et al., 2013; Strubell et al., 2015), optimizing the trade-off between prediction entropy and the number of selected features (Ardywibowo et al., 2019), to building a metaclassifier for sensor selection (Zappi et al., 2008). These methods all use immediate rewards to perform feature selection. For predicting long activity sequences, this potentially ignores the information that a feature may have on future predictions, or conversely, overestimates the importance of a feature given previous observations. Among the RL based approaches, some methods attempt to build an MDP to decide which feature to select next or whether to stop acquiring features and make a prediction (He & Eisner, 2012; Karayev et al., 2013; Kolamunna et al., 2016). These methods condition the choice of one feature on the observation generated by another one, instead of choosing between all sensors simultaneously.
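The model specification above maps almost one-to-one onto a small recurrent module. The sketch below is a minimal PyTorch rendering under our own assumptions (the hidden size, the output head, and feeding only the gated current features at each step are illustrative choices, not the exact published architecture):

```python
import torch
import torch.nn as nn

class DynamicFeatureSelector(nn.Module):
    # A GRU summarizes the gated observations and sigma(W h_{t-1}) parameterizes
    # the next selection distribution, so phi = {W} as described in the text.
    def __init__(self, n_features, n_classes, hidden=64, tau=0.5):
        super().__init__()
        self.gru = nn.GRUCell(n_features, hidden)
        self.W = nn.Linear(hidden, n_features)          # selection weights
        self.out = nn.Linear(hidden, n_classes)
        self.tau = tau

    def forward(self, x):                               # x: (B, T, P)
        B, T, P = x.shape
        h = x.new_zeros(B, self.gru.hidden_size)
        logits, probs_all = [], []
        for t in range(T):
            probs = torch.sigmoid(self.W(h))            # pi_t(z | X^{t-1}, phi)
            u = torch.rand_like(probs).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)      # Logistic(0, 1) sample
            z = torch.sigmoid((torch.log(probs) + noise) / self.tau)   # Eq. (7)
            h = self.gru(x[:, t] * z, h)                # only gated features observed
            logits.append(self.out(h))
            probs_all.append(probs)
        return torch.stack(logits, dim=1), torch.stack(probs_all, dim=1)
```

Training then minimizes the task loss on the per-step logits plus λ times the sum of the selection probabilities, as in Eq. (6); at test time, hard Bernoulli samples replace the relaxed gates.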
Spaan & Lima (2009) and Satsangi et al. (2015) formulated a Partially Observable MDP (POMDP) using a discretization of the continuous state to model the policy. Yang et al. (2020) formulate an RL objective by penalizing the prediction performance by the number of sensors used. Although using a desirable objective, the method employs a greedy maximization process to approximately solve the combinatorial optimization. Moreover, it does not integrate easily with existing deep architectures. Attention is another method worth noting, as it is able to select the most relevant segments of a sequence for the current prediction (Vaswani et al., 2017). Attention modules have been recently used for activity recognition (Ma et al., 2019). However, like most attention methods, it requires all of the features to be observed before deciding which features are the most important for prediction. Moreover, the number of instances attended to is not penalized. Finally, soft attention methods typically weight the inputs, instead of selecting the feature subset. Indeed, our experiments on naively applying attention for dynamic feature selection show that it always selects 100% of the features at all times. Sparse regularization has previously been formulated for deep models, e.g., Liu et al. (2015); Louizos et al. (2017); Frankle & Carbin (2018), but their focus has primarily been on statically compressing model sizes or reducing overfitting, instead of dynamically selecting features for prediction. In particular, ℓ1 regularization is a common method to promote feature sparsity (Tibshirani, 1996; Friedman et al., 2010, 2008; Zou & Hastie, 2005). Selection or skipping along the temporal direction to decide when to memorize vs. update the model state has been considered in Hu et al. (2019); Campos et al. (2018); Neil et al. (2016). These works either are not context dependent or do not consider energy efficiency or interpretability. Additionally, skipping time steps may not be suitable for continuous monitoring tasks including HAR, where we are tasked to give a prediction at every time step. Nevertheless, our dynamic/adaptive feature selection is orthogonal to temporal selection/skipping and we leave exploring the potential integration of these two directions as our future research. Finally, there have been many formulations that propose to solve the issue of backpropagation through discrete random variables (Jang et al., 2016; Maddison et al., 2016; Tucker et al., 2017; Grathwohl et al., 2017; Yin & Zhou, 2018). REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2017) employ REINFORCE and introduce relaxation-based baselines to reduce the sample variance of the estimator. However, these baseline functions increase the computation and cause a potential conflict between minimizing the sample variance of the gradient estimate and maximizing the expectation objective. Augment-REINFORCE-Merge is a self-control gradient estimator that does not need additional baselines (Yin & Zhou, 2018). It provides unbiased gradient estimates that exhibit low variance, but its direct application to autoregressive or sequential setups is not addressed by Yin & Zhou (2018) and leads to approximate gradients. Moreover, an exact sequential formulation would require prohibitive computation, with a number of forward passes quadratic in the sequence length.
4 EXPERIMENTS Benchmark Datasets and Performance Evaluation We evaluate our model on four different datasets: the UCI Human Activity Recognition (HAR) using Smartphones Dataset (Anguita et al., 2013), the OPPORTUNITY Dataset (Roggen et al., 2010), the ExtraSensory dataset (Vaizman et al., 2017), and the NTU-RGB-D dataset (Shahroudy et al., 2016). Although there are many other human activity recognition benchmark datasets (Chen et al., 2020), we choose the above datasets to better convey our message of achieving feature usage efficiency and interpretability using our adaptive feature selection framework with the following reasons. First, the UCI HAR dataset is a clean dataset with no missing values, allowing us to benchmark different methods without any discrepancies in data preprocessing confounding our evaluations. Second, the OPPORTUNITY dataset contains activity labels that correspond to specific sensors. An optimal adaptive feature selector should primarily choose these sensors under specific contexts with clear physical meaning. Finally, the ExtraSensory dataset studies a multilabel classification problem, where two or more labels can be active at any given time, while the NTU-RGB-D dataset is a complicated activity recognition dataset with over 60 classes of activities using data from 25 skeleton joints. These datasets allow us to benchmark model performance in a complex setting. For all datasets, we randomly split data both chronologically and by different subjects. More details for each dataset and its corresponding experiment setup is provided under its own subheading in the following and also in Appendix A. Due to the page limit, our implementation details and results on the NTU-RGB-D dataset are available in Appendix A and B. We investigate several aspects of our model performance on these benchmarks. To show the effect in prediction accuracy when our selection module is considered, we compare its performance to a standard GRU network (Cho et al., 2014b). To show the effect of considering dynamic feature selection, we compare a nonadaptive `0 formulation that statically selects features by solving (4) (Louizos et al., 2017). The performance of our `0 regularized formulation is also benchmarked with an `1 regularized formulation. To benchmark the performance of our differentiable relaxationbased optimization strategy, we implement the Straight-Through estimator (Hinton et al., 2012) and Augment-REINFORCE-Merge (ARM) gradient estimates (Yin & Zhou, 2018) as alternative methods to optimize our formulation. As stated in the previous section, the fully sequential application of ARM was not addressed in the original paper, and will be prohibitively expensive to compute exactly. Hence, we combine ARM and Straight-Through (ST) estimator (Hinton et al., 2012) as another approach to optimize our formulation. More specifically, we calculate the gradients with respect to the Bernoulli variables with ARM, and use the ST estimator to backpropagate the gradients through the Bernoulli variables to previous layers’ parameters. We also have tested different values for the temperature hyperparameter τ in Appendix D, where we observe that the settings with the temperature parameters below 1 generally yield the best results with no noticeable performance difference. To further show the importance of considering the sparse regularized formulation, we compare with an attention-based feature selection, selecting features based on the largest attention weights. 
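Among the benchmarked gradient estimators, the Straight-Through variant is the simplest to state in code. The sketch below is our generic rendering of an ST Bernoulli gate, not the exact baseline implementation used in the experiments:

```python
import torch

def straight_through_bernoulli(probs):
    # Sample hard Bernoulli gates in the forward pass, but treat the sampling as
    # the identity in the backward pass so gradients flow to the probabilities.
    hard = torch.bernoulli(probs)
    return hard + probs - probs.detach()
```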
Because attention yields feature attention weights instead of feature subsets, we select features by using a hard threshold α of the attention weights and scaling the selected features by 1− α for different values of α. Indeed, without this modification, we observe that an attention-based feature selection would select 100% of the features at all times. Finally, we have attempted to implement the dynamic feature selection method by Yang et al. (2020) as a distinctly different benchmark. However, without any implementation details provided by the authors, we were not able to reproduce their results. UCI HAR Dataset We first test our proposed method on performing simultaneous prediction and adaptive feature selection on the UCI HAR dataset (Anguita et al., 2013). This dataset consists of 561 smartphone sensor measurements including various gyroscope and accelerometer readings, with the task of inferring the activity that the user performs at any given time. There are six possible activities that a subject can perform: walking, walking upstairs, walking downstairs, sitting, standing, and laying. We first compare various optimization methods, using stochastic gradients by differential relaxation using Gumbel-Softmax reparametrization, ARM, ST-ARM, Straight-Through gradients, and an `1 regularized formulation to solve adaptive feature selection. The results are provided in Table 1. As shown, Gumbel-Softmax achieves the best prediction accuracy with the least number of features. Utilizing either the Straight Through estimator, ARM, or ST-ARM for gradient estimation cannot provide a better balance between accuracy and efficiency compared with the Gumbel-Softmax relaxation-based optimization. Indeed, the performance of the ST estimator is expected, as there is a mismatch between the forward propagated activations and the backward propagated gradients in the estimator. Meanwhile, we attribute the lower performance of the ARM and ST-ARM optimizer to its use in a sequential fashion, which was not originally considered. The lower performance of the `1 regularized formulation is expected, as `1 regularization is an approximation to the problem of selecting the optimal feature subset. In the following experiments, we have seen similar trends and only report the results from the Gumbel-Softmax based optimization. Benchmarking results of different models are given in Table 2. As shown, our adaptive feature selection model is able to achieve a competitive accuracy using only 0.28% of the features, or on average about 1.57 sensors at any given time. We also observe that both the attention and our adaptive formulation is able to improve upon the accuracy of the standard GRU, suggesting that feature selection can also regularize the model to improve accuracy. Although the attention-based model yields the best accuracy, this comes at a cost of utilizing around 50% of the features at any given time. We also have checked the average accuracy of our model on a time-aligned testing set to show that our model is stable for long-term predictions in Appendix E. We study the effect of the regularization weight λ by varying it from λ ∈ {1, 0.1, 0.01, 0.005, 0.001}. We compare this with the attention model by varying the threshold α used to select features from α ∈ {0.5, 0.9, 0.95, 0.99, 0.995, 0.999}, as well as the nonadaptive model by varying its λ from λ ∈ {1000, 100, . . . 0.01, 0.005, 0.001}. 
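The attention baseline introduced at the start of this subsection reduces to thresholding and rescaling. A minimal sketch follows, with the function name ours; it implements the rule exactly as stated in the text (keep features whose attention weight exceeds α, then scale the kept features by 1 − α):

```python
import torch

def attention_threshold_select(x, attn, alpha=0.95):
    # x, attn: tensors of the same shape; attn holds per-feature attention weights.
    keep = (attn > alpha).float()
    return keep * x * (1.0 - alpha), keep.mean()   # masked input, fraction selected
```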
A trade-off curve between the number of selected features and the performance for the three models can be seen in Figure 2(b). As shown in the figure, the accuracy of the attention model suffers increasingly with smaller feature subsets, as attention is not a formulation specifically tailored to find sparse solutions. On the other hand, the accuracy of our adaptive formulation is unaffected by the number of features, suggesting that selecting around 0.3% of the features on average may be optimal for the given problem. It further confirms that our adaptive formulation selects the most informative features given the context.

Table 2: Comparison of various models for adaptive monitoring on three activity recognition datasets. *Accuracy metrics and average number of features selected are all in (%).
Method | UCI HAR (Accuracy / Features) | OPPORTUNITY (Accuracy / Features) | ExtraSensory (Accuracy / F1 / Features)
Adaptive (Ours), λ = 1 | 97.18 / 0.28 | 84.26 / 15.88 | 91.14 / 55.06 / 11.25
Attention, α = 0.5 | 98.38 / 49.94 | 83.42 / 54.20 | 90.37 / 53.29 / 54.73
Nonadaptive, λ = 1 (Louizos et al., 2017) | 95.49 / 14.35 | 81.63 / 49.57 | 91.13 / 53.18 / 42.32
No selection (GRU) (Cho et al., 2014b) | 96.67 / 100 | 84.16 / 100 | 91.14 / 53.53 / 100

The performance of the nonadaptive model is consistent for feature subsets of size 10% or greater. However, it suffers a drop in accuracy for extremely small feature subsets. This shows that for static selection, selecting a feature set that is too large would result in collecting many redundant features for certain contexts, while selecting a feature set that is too small would be insufficient for maintaining accuracy. An example of dynamically selected features can be seen in Figure 2(a). We plot the prediction of our model compared to the true label and illustrate the features that are used for prediction. We also plot a heatmap for the features selected under each activity in Figure 2(c). Although these features alone may not be the only ones necessary for prediction under specific activities, such a visualization is useful to retrospectively observe the features selected by our model at each time point. Note that mainly 5 out of the 561 features are used for prediction at any given time. Observing the selected features, we see that for the static activities such as sitting, standing, and laying, only sensor features 52 and 63, which relate to the gravity accelerometer, are necessary for prediction. On the other hand, the active states such as walking, walking up, and walking down require three sensor features: sensors 65, 508, and 556, which relate to both the gravity accelerometer and the body accelerometer. This is intuitively appealing as, under the static contexts, the body accelerometer measurements would be relatively constant, and unnecessary for prediction. On the other hand, for the active contexts, the body accelerometer measurements are necessary to reason about how the subject is moving and accurately discriminate between the different active states. Meanwhile, we found that measurements relating to the gyroscope were unnecessary for prediction. UCI OPPORTUNITY Dataset We further test our proposed method on the UCI OPPORTUNITY Dataset (Roggen et al., 2010). This dataset consists of multiple different label types for human activity, ranging from locomotion and hand gestures to object interactions.
The dataset consists of 242 measurements from accelerometers and Inertial Measurement Units (IMUs) attached to the user, as well as accelerometers attached to different objects with which the user can interact. We use the mid-level gesture activities as the target for our models to predict, which contain gestures related to specific objects, such as opening a door and drinking from a cup. A comparison of the accuracy and the percentage of selected features by different models is given in Table 2, while example predictions and a trade-off curve are constructed and shown in Figures 3(a), 3(b), and 3(c), with a similar trend as the results on the UCI HAR dataset. Notably, the trade-off for the nonadaptive models remains constant for λ ∈ {0.0001, 0.001, . . . , 1}, with a sharp decrease in accuracy for λ ≥ 10. A heatmap for the selected features under each activity is shown in Figure 4. Here, the active sensor features across all activities are features 40 and 42, readings of the IMU attached to the subject’s back, feature 82, readings from the IMU attached to the left upper arm (LUA), and features 230 and 239, location tags that estimate the subject’s position. We posit that these general sensor features are selected to track the subject’s overall position and movements, as they are also predominantly selected in cases with no labels. Meanwhile, sensors 5, 6, and 16, readings from the accelerometer attached to the hip, LUA, and back, are specific to activities involving opening/closing doors or drawers. Interestingly, sensors attached to specific objects, such as accelerometers on doors and cups, are unnecessary for prediction. We attribute this to the severe amount of missing values of these sensors. Indeed, the sensors that have the least amount of missing values are the body sensors and the localization tags. We hypothesize that the model prefers these sensors for their consistent discriminative power on multiple activity types compared to the object specific sensors. In addition to these object specific sensors, 5 IMUs, 9 accelerometers, and 2 localization tags can be completely turned off without significantly affecting prediction performance on this task. ExtraSensory Dataset We further test our proposed method on the ExtraSensory Dataset (Vaizman et al., 2017). This is a multilabel classification dataset, where two or more labels can be active at any given time. It consists of 51 different context labels, and 225 sensor features. We frame the problem as a multilabel binary classification problem, where we have a binary output for each label indicating whether it is active. A comparison of the accuracy and selected features by different models tested can be seen in Table 2. Our method is again competitive with the standard GRU model using less than 12% of all the features. A trade-off curve is shown in Figure 5(b), where we see a similar trend for both adaptive and attention models. However we were unable to obtain a feature selection percentage lower than 25% for the nonadaptive model even with λ as large as 104. We believe that this is because at least 25% of statically selected features are needed; otherwise the nonadaptive model will degrade in performance catastrophically, similar to the OPPORTUNITY dataset results. A heatmap and detailed discussion of the features that our model dynamically selected can be found in Appendix C. 
The results on these three datasets along with the results on the NTU-RGB-D dataset in Appendix B indicate that our adaptive monitoring framework provides the best trade-off between feature efficiency and accuracy, while the features that it dynamically selects are also interpretable and associated with the actual activity types. 5 CONCLUSIONS We propose a novel method for performing adaptive feature selection by sequential context-dependent feature subset selection, which is cast into a stochastic optimization formulation by modifying the `0 regularized minimization formulation. To make this problem tractable, we perform a stochastic relaxation along with a differentiable reparamaterization, making the optimization amenable to gradient-based optimization with auto-differentiation. We apply this method to human activity recognition by implementing our method to Recurrent Neural Network-based architectures. We benchmark our model on four different activity recognition datasets and have compared it with various adaptive and static feature selection benchmarks. Our results show that our model maintains a desirable prediction performance using a fraction of the sensors or features. The features that our model selected were shown to be interpretable and associated with the activity types. B RESULTS AND DISCUSSION OF THE NTU-RGB-D DATASET We have tested our proposed method on the NTU-RGB-D dataset (Shahroudy et al., 2016). This dataset consists of 60 different activities performed by either a single individual or two individuals. The measurements of this dataset are in the form of skeleton data consisting of 25 different 3D coordinates of the corresponding joints of the participating individuals. We compare our method with three different baselines shown in Table 3: the standard independent RNN, a soft attention baseline, and a thresholded attention baseline. We see that our method maintains a competitive accuracy compared to the baseline using less than 50% of the features. On the other hand, because the thresholded attention formulation is not specifically optimized for feature sparsity, we see that it performs significantly worse compared to the other methods. Meanwhile, the softattention slightly improves upon the accuracy of the base architecture. However, as also indicated by our other experiments, soft-attention is not a dynamic feature selection method, and tends to select 100% of the features at all times. A heatmap for the features selected under each activity is shown in Figure 7. Here, we can see that there are two distinct feature sets used for two different types of interactions: single person interactions and two person interactions. Indeed, since the two person activities require sensor measurements from two individuals, the dynamic feature selection would need to prioritize different features to observe their activities as opposed to single person activities. C RESULTS AND DISCUSSION OF THE EXTRASENSORY DATASET A heatmap of the features selected under each activity state can be seen in Figure 8. As shown, there are four groups of sensor features that are used across activities: the phone magnetometer (57-71), watch accelerometer magnitude (85-88), watch accelerometer direction (101-105), and location (138-147). For two particular states, ‘on a bus’ and ‘drinking alcohol’, phone accelerometer measurements (5-52) become necessary for prediction. Some states such as ‘at home’, ‘at main workplace’, and ‘phone in pocket’ are notably sparse in sensor feature usage. 
We believe that these states are static, and do not require much sensor usage to monitor effectively. Other sensors such as the phone gyroscope, phone state, audio measurements and properties, compass, and various low-frequency sensors are largely unnecessary for prediction in this dataset. D EFFECTS OF THE HYPERPARAMETER τ ON MODEL PERFORMANCE We observe the effects of the temperature hyperparameter in (7) on our model’s performance. To do this, we have tested several hyperparameter values in our experiment with the UCI HAR dataset. The results of our tests can be seen in Figure 9. In general, the settings with the temperature parameters below 1 generally yield the best results with no noticeable performance difference. Once the temperature is set to above 1, we observe a sharp increase in errors. We attribute this to the mismatch between training and testing setups, where in testing, discrete binary values are sampled while in training, the samples are reduced to an equal weighting between the features. E MODEL PERFORMANCE AND STABILITY ACROSS TIME We show the average accuracy over every 1000 seconds of running the model on the testing subjects in the UCI HAR dataset in Table 4. Based on the performance of the model across time, the model is shown to be stable for long-term predictions. In general, there is no clear temporal degradation in the testing performance for this dataset. Instead, the change of prediction errors is mostly dependent on the underlying activity types. F UNION OF ALL FEATURES SELECTED BY THE ADAPTIVE MODEL Here, in addition to showing the average number of selected features, we compute the percentage of all features considered by our model across the full time-length. In other words, the results presented here show the union of selected features across the time horizon. In Section 4, we chose to present the average number of selected features as it directly reflects the number of required sensors for accurate HAR. Hence, it clearly shows the benefits of our proposed dynamic/adaptive feature selection with respect to the power usage for sensor data collection. From Table 5, it is clear that the percentage of all the features considered across the full time-length is also significantly low for each of the three benchmark datasets, which further validates the potential of our dynamic feature selection even when additional operational cost of turning on/off sensors needs to be considered. DYNAMIC FEATURE SELECTION FOR EFFICIENT AND INTERPRETABLE HUMAN ACTIVITY RECOGNITION Anonymous authors Paper under double-blind review ABSTRACT In many machine learning tasks, input features with varying degrees of predictive capability are usually acquired at some cost. For example, in human activity recognition (HAR) and mobile health (mHealth) applications, monitoring performance should be achieved with a low cost to gather different sensory features, as maintaining sensors incur monetary, computation, and energy cost. We propose an adaptive feature selection method that dynamically selects features for prediction at any given time point. We formulate this problem as an `0 minimization problem across time, and cast the combinatorial optimization problem into a stochastic optimization formulation. We then utilize a differentiable relaxation to make the problem amenable to gradient-based optimization. Our evaluations on four activity recognition datasets show that our method achieves a favorable trade-off between performance and the number of features used. 
Moreover, the dynamically selected features of our approach are shown to be interpretable and associated with the actual activity types. 1 INTRODUCTION Acquiring predictive features is critical for building trustworthy machine learning systems, but this often comes at a daunting cost. Such a cost can be in the form of energy needed to maintain an ambient sensor [1, 2], time needed to complete an experiment [3], or manpower required to monitor a hospital patient [4]. Therefore, it becomes important not only to maintain good performance in the specified task, but also a low cost to gather these features. Indeed, existing Human Activity Recognition (HAR) methods typically use a fixed set of sensors, potentially collecting redundant features to discriminate contexts [5, 6, 7, 8]. Classic feature selection methods such as the LASSO and its variants can address the performance-cost trade-off by optimizing an objective penalized by a term that helps promote feature sparsity [9, 10, 11, 12]. Such feature selection formulations are often static, that is, a fixed set of features are selected a priori. However, different features may offer different predictive power under different contexts. For example, a health worker may not need to monitor a recovering patient as frequently compared to a patient with the declining condition; an experiment performed twice may be redundant; or a smartphone sensor may be predictive when the user is walking but not when the user is in a car. By adaptively selecting which sensor(s) to observe at any given time point, one can further reduce the inherent cost for prediction and achieve a better trade-off between cost and prediction accuracy. In addition to cost-efficiency, an adaptive feature selection formulation can also lead to more inter- pretable and trustworthy predictions. Specifically, the predictions made by the model are only based on the selected features, providing a clear relationship between input features and model predictions. Existing efforts on interpreting models are usually based on some post-analyses of the predictions, including the approaches in (1) visualizing higher level representations or reconstructions of inputs based on them [13, 14], (2) evaluating the sensitivity of predictions to local perturbations of inputs or the input gradients [15, 16], and (3) extracting parts of inputs as justifications for predictions [17]. Another related but orthogonal direction is model compression of training sparse neural networks with the goal of memory and computational efficiency [18, 19, 20]. All these works require collecting all features first and provide post-hoc feature relevance justifications or network pruning. Recent efforts on dynamic feature selection adaptively assign features based on immediate statistics [21, 22, 1, 23], ignoring the information a feature may have on future predictions. Others treat feature selection as a Markov Decision Process (MDP) and use Reinforcement Learning (RL) to solve it [24, 25, 26, 27, 28, 2]. However, solving the RL objective is not straightforward. Besides being sensitive to hyperparameter settings in general, approximations such as state space discretization and greedy approximations of the combinatorial objective were used to make the RL problem tractable. To this end, we propose a dynamic feature selection method that can be easily integrated into existing deep architectures and trained from end to end, enabling task-driven dynamic feature selection. 
To achieve this, we define a feature selection module that dynamically selects which features to use at any given time point. We then formulate a sequential combinatorial optimization that minimizes the trade-off between the learning task performance and the number of features selected at each time point. To make this problem tractable, we cast this combinatorial optimization problem into a stochastic optimization formulation. We then adopt a differentiable relaxation of the discrete feature selection variables to make it amenable to stochastic gradient descent based optimization. It can therefore be plugged in and jointly optimized with state-of-the-art neural networks, achieving task-driven feature selection over time. To show our method's ability to adaptively select features while maintaining good performance, we evaluate it on four time-series activity recognition datasets: the UCI Human Activity Recognition (HAR) dataset [29], the OPPORTUNITY dataset [30], the ExtraSensory dataset [31], as well as the NTU-RGB-D dataset [32]. Several ablation studies and comparisons with other dynamic and static feature selection methods demonstrate the efficacy of our proposed method. Specifically, our dynamic feature selection is able to use as little as 0.28% of the sensor features while still maintaining good human activity monitoring accuracy. Moreover, our dynamically selected features are shown to be interpretable, with direct correspondence to different contexts and activity types. 2 METHODOLOGY 2.1 THE $\ell_0$-NORM MINIMIZATION PROBLEM Many regularization methods have been developed to solve simultaneous feature selection and model parameter estimation [9, 12, 33, 34, 35]. The ideal penalty for the purpose of feature selection is the $\ell_0$-norm of the model coefficients for all predictors. This norm is equivalent to the number of nonzero terms in all the model coefficients. Given a dataset $\mathcal{D}$ containing $N$ independent and identically distributed (iid) input-output pairs $\{(x_1, y_1), \ldots, (x_N, y_N)\}$ with each $x_i$ containing $P$ features, a hypothesis class of predictor functions $f(\cdot;\theta)$, and a loss function $\mathcal{L}(\hat{y}, y)$ between prediction $\hat{y}$ and true output $y$, the $\ell_0$-norm regularized optimization problem can be written as follows: $\min_{\theta} \frac{1}{N} \left( \sum_{i=1}^{N} \mathcal{L}(f(x_i;\theta), y_i) \right) + \lambda \|\theta\|_0$, (1) where $\|\theta\|_0 = \sum_{j=1}^{P} \mathbb{I}[\theta_j \neq 0]$ penalizes the number of nonzero model coefficients. In the models that linearly transform the input features $x_i$, penalizing the weights relating to each feature in $x_i$ enables sparse feature subset selection. However, such a selection is static, as it does not adaptively select features that are appropriate for a given context. Moreover, the optimization above is computationally prohibitive, as it involves combinatorial optimization to select the subset of nonzero model coefficients corresponding to the input features. In the following, we formulate our adaptive dynamic feature selection problem when learning with multivariate time series. Coupled with training recurrent neural networks, this adaptive feature selection problem is transformed into a sequential context-dependent feature subset selection problem, to which we devise a stochastic relaxation to make the problem tractable. 2.2 DYNAMIC FEATURE SELECTION VIA SEQUENTIAL CONTEXT-DEPENDENT FEATURE SUBSET SELECTION Instead of finding a subset of nonzero model coefficients, an equivalent formulation can be derived by directly selecting the feature subset.
Without loss of generality, let $z$ be a binary vector that indicates whether each feature is selected or not. Then, the original $\ell_0$-norm optimization formulation can be equivalently written as follows: $\min_{\theta,z} \frac{1}{N} \left( \sum_{i=1}^{N} \mathcal{L}(f(x_i \circ z;\theta), y_i) \right) + \lambda \|z\|_0$. (2) Compared to the original problem, the penalty on the number of selected features is through the $\ell_0$-norm of $z$. This formulation is more flexible, as $z$ can be made dependent on corresponding input features, output labels, or any contextual information, allowing us to formulate our dynamic feature selection problem when learning with multivariate time series data. Specifically, let the input-output pairs $(x_i, y_i)$ be a pair of time series data of length $T_i$. At each time $t$, our model predicts the output $y_i^t$, as well as the next feature set to select $z_i^t$. This optimization problem can be formulated as: $\min_{\theta,z} \frac{1}{N} \left( \sum_{i=1}^{N} \sum_{t=1}^{T_i} \mathcal{L}(f(x_i^{0:t-1} \circ z_i^{0:t-1};\theta), y_i^t) \right) + \lambda \sum_{i=1}^{N} \sum_{t=1}^{T_i} \|z_i^t\|_0$. (3) Here, we are tasked to find a set of parameters $\theta$ and feature sets $z_i^t$ for each sample $i$ at each time point $t$ to optimize the trade-off between model performance and the number of selected features. The model then uses the parameters and the previously observed features $\mathcal{X}_i^t \triangleq x_i^{0:t-1} \circ z_i^{0:t-1}$ to infer the next output $y_i^t$. However, the above formulation remains intractable, as it involves combinatorial optimization to select the feature subsets at each time point, in addition to the joint optimization of the model parameters and variable selection. Naively, one may also need to solve a separate optimization problem to find $z_i^t$ for each time point during the run time. In the following section, we derive a relaxation based on stochastic optimization parameterizing the $z_i^t$'s to make the above problem tractable. 2.3 RELAXATION THROUGH STOCHASTIC OPTIMIZATION Instead of finding the exact feature subsets indexed by $z_i^t$ that achieve the optimal regularized objective, one can treat these $z_i^t$'s as binary random variables and seek to optimize the distribution $\pi(z|\phi)$ that generates these random variables. For the ease of exposition, we first focus on the relaxation of the non-adaptive formulation in (1) as follows: $\min_{\theta,\phi} \mathbb{E}_{(x_i,y_i)\sim\mathcal{D}} \left[ \mathbb{E}_{z\sim\pi(z|\phi)} \left[ \mathcal{L}(f(x_i \circ z;\theta), y_i) + \lambda \|z\|_0 \right] \right]$. (4) Note that the solution to this problem is equivalent to the original one, as the original combinatorial problem can be recovered by setting $\pi(z|\phi) = \mathrm{Bern}(\phi)$, a Bernoulli distribution parameterized by $\phi$, and restricting $\phi \in \{0, 1\}$. Using this relaxation, the regularization term can now be evaluated analytically: $\mathbb{E}_{z\sim\pi(z|\phi)}\left[\|z\|_0\right] = \mathbb{E}_{z\sim\mathrm{Bern}(\phi)}\left[\|z\|_0\right] = \sum_{j=1}^{P} \pi(z|\phi)_j = \sum_{j=1}^{P} \phi_j$. (5) On the other hand, the outer expectation in (4) can be approximated using minibatches. To extend the above relaxation for time series data, we first note that our adaptive feature selection formulation in (3) allows each time point to have its own feature selection distribution $\pi_i^t(z|\phi) \triangleq \pi(z|\mathcal{X}_i^{t-1},\phi)$ conditioned on previous observations $\mathcal{X}_i^{t-1}$. Let $\pi_i(z|\phi)$ be the set of $\pi_i^t(z|\phi)$ for all $t \in \{1, \ldots, T_i\}$. The stochastic relaxation of the adaptive feature selection formulation can be written as follows: $\min_{\theta,\phi} \mathbb{E}_{(x_i,y_i)\sim\mathcal{D}} \left[ \mathbb{E}_{z_i\sim\pi_i(z|\phi)} \left[ \sum_{t=1}^{T_i} \mathcal{L}(f(\mathcal{X}_i^{t-1};\theta), y_i^t) \right] + \lambda \sum_{t=1}^{T_i} \sum_{j=1}^{P} \pi_i^t(z|\phi)_j \right]$. (6) 2.4 MODEL PARAMETERIZATION AND DIFFERENTIABLE RELAXATION The difficulty in solving the above problem using gradient descent is that the discrete random variables $z_i^t$'s are not directly amenable to stochastic reparameterization techniques.
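Before turning to the reparameterization in the next subsection, here is a small sketch of how the relaxed objective above can be evaluated: the expected $\ell_0$ penalty in (5) reduces to a sum of Bernoulli probabilities, which is then added to the accumulated per-step task loss as in (6). The use of PyTorch and the tensor names are our own assumptions and not the authors' code.

```python
import torch

def expected_l0_penalty(keep_probs: torch.Tensor) -> torch.Tensor:
    """Analytic E[||z||_0] for independent Bernoulli gates, as in Eq. (5).

    keep_probs: tensor of shape (T, P) with the per-timestep selection
    probabilities pi_t(z|phi)_j; the expectation is simply their sum.
    """
    return keep_probs.sum()

def relaxed_objective(task_losses: torch.Tensor,
                      keep_probs: torch.Tensor,
                      lam: float) -> torch.Tensor:
    """Single-sequence version of the relaxed objective in Eq. (6).

    task_losses: tensor of shape (T,) holding L(f(X_{t-1}; theta), y_t) per step.
    keep_probs:  tensor of shape (T, P) of Bernoulli probabilities.
    lam:         regularization weight lambda.
    """
    return task_losses.sum() + lam * expected_l0_penalty(keep_probs)

# Tiny usage example with random placeholders.
T, P = 10, 561
losses = torch.rand(T)
probs = torch.sigmoid(torch.randn(T, P))   # stand-in for pi_t(z|phi)
print(relaxed_objective(losses, probs, lam=0.01))
```

The outer expectation over sequences in (6) would then be approximated by averaging this quantity over minibatches, as noted above.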
An effective and simple to implement formulation that we adopt is the Gumbel-Softmax reparameterization [36, 37], which relaxes a discrete valued random variable $z$ parameterized by $\phi$ to a continuous random variable $\tilde{z}$. Firstly, we can parameterize $\pi(z|\mathcal{X}_i^{t-1},\phi)$ using a vector-valued function $\sigma(\mathcal{X}_i^{t-1},\phi)$ of the previous observations $\mathcal{X}_i^{t-1}$, with $\phi$ now being the parameters of $\sigma(\cdot)$. The distribution can now be rewritten as $\pi(z|\mathcal{X}_i^{t-1},\phi) = \mathrm{Bern}(\sigma(\mathcal{X}_i^{t-1},\phi))$. With this, the discrete valued random variables $z_i^t$ can be relaxed into continuous random variables $\tilde{z}_i^t$ as follows: $\tilde{z}_i^t = \frac{1}{1 + \exp\left(-(\log \sigma(\mathcal{X}_i^{t-1},\phi) + L)/\tau\right)}$. (7) Here, $L = \log u - \log(1-u)$ is a logistic distribution, where $u \sim \mathrm{Unif}(0,1)$, and $\tau$ is a temperature parameter. For low values of $\tau$, $\tilde{z}_i^t$ approaches a sample of a binary random variable, recovering the original discrete problem, while for high values, $\tilde{z}_i^t$ will equal $\frac{1}{2}$. With this, we are able to compute gradient estimates of $\tilde{z}_i^t$ and approximate the gradient of $z_i^t$ as $\nabla_{\theta,\phi} z_i^t \approx \nabla_{\theta,\phi} \tilde{z}_i^t$. This enables us to backpropagate through the discrete random variables and train the selection parameters along with the model parameters jointly using stochastic gradient descent. Meanwhile, at test time, we sample binary random variables from the learned probabilities. 2.5 MODEL SPECIFICATION To complete our formulation, we specify the model architecture that we use. We have implemented our adaptive dynamic feature selection with a Gated Recurrent Unit (GRU) [38], a type of Recurrent Neural Network (RNN) [39], as shown in Figure 1. Here, we have the previous observations $\mathcal{X}_i^{t-1}$ being summarized by the hidden state $h_i^{t-1}$. For adaptive feature selection, the selection distribution is made dependent on $h_i^{t-1}$ using a sigmoid of its linear transformation by a weight matrix $W$ as follows: $\sigma(\mathcal{X}_i^{t-1},\phi) = \mathrm{sigmoid}(W h_i^{t-1})$, such that $\phi = \{W\}$. We note that such a module can be easily integrated into many existing deep architectures and trained from end to end, allowing for task-driven feature selection. For example, the module can be applied to Recurrent Convolutional Neural Networks (RCNN) [40] to selectively determine which convolutional patches/channels to use, or to general feedforward networks to selectively deactivate certain neurons/channels to reduce computation. We have demonstrated this ability by applying it to an Independent RNN [41] benchmarked on the NTU-RGB-D dataset [32], as detailed in Appendix A.4. 3 RELATED WORK Existing HAR systems typically use a fixed set of sensors, potentially collecting redundant features for easily discriminated contexts. Methods that attempt to find a fixed or static feature set often rank feature sets using metrics such as Information Gain [5], or relevancy ranking through a filtering strategy [6, 7, 8]. However, static feature selection can potentially result in collecting redundant information for highly discriminable contexts. Work on dynamic feature selection can be divided into Reinforcement Learning (RL) based and non-RL approaches. Non-RL based approaches vary from assigning certain features to certain activities [21], pre-defining feature subsets for prediction [22, 42], optimizing the trade-off between prediction entropy and the number of selected features [1], to building a metaclassifier for sensor selection [23]. These methods all use immediate rewards to perform feature selection.
For predicting long activity sequences, this potentially ignores the information that a feature may have on future predictions, or conversely, overestimate the importance of a feature given previous observations. Among the RL based approaches, some methods attempt to build an MDP to decide which feature to select next or whether to stop acquiring features and make a prediction [24, 25, 26]. These methods condition the choice of one feature on the observation generated by another one, instead of choosing between all sensors simultaneously. Spaan and Lima [27] and Satsangi et al. [28] formulated a Partially Observable MDP (POMDP) using a discretization of the continuous state to model the policy. Yang et al. [2] formulate an RL objective by penalizing the prediction performance by the number of sensors used. Although using a desirable objective, the method employs a greedy maximization process to approximately solve the combinatorial optimization. Moreover, they do not integrate easily with existing deep architectures. Attention is another method worth noting, as it is able to select the most relevant segments of a sequence for the current prediction [43]. Attention modules have been recently used for activity recognition [44]. However, like most attention methods, it requires all of the features to be observed before deciding which features are the most important for prediction. Moreover, the number of instances attended to is not penalized. Finally, soft attention methods typically weight the inputs, instead of selecting the feature subset. Indeed, our experiments on naively applying attention for dynamic feature selection show that it always selects 100% of the features at all times. Sparse regularization has previously been formulated for deep models, e.g., [45, 18, 46], but their focus has primarily been in statically compressing model sizes or reducing overfitting, instead of dynamically selecting features for prediction. In particular, `1 regularization is a common method to promote feature sparsity [9, 10, 11, 12]. Finally, there have been many formulations that propose to solve the issue of backpropagation through discrete random variables [36, 37, 47, 48, 49]. REBAR [47] and RELAX [48] employ REINFORCE and introduce relaxation-based baselines to reduce sample variance of the estimator. However, these baseline functions increase the computation and cause potential conflict between minimizing the sample variance of the gradient estimate and maximizing the expectation objective. Augment-REINFORCE-Merge is a self-control gradient estimator that does not need additional baselines [49]. It provides unbiased gradient estimates that exhibit low variance, but its direct application to autoregressive or sequential setups is not addressed by Yin and Zhou [49] and leads to approximate gradients. Moreover, an exact sequential formulation will require prohibitive computation, squared in sequence length forward passes. 4 EXPERIMENTS Benchmark Datasets and Performance Evaluation We evaluate our model on four different datasets: the UCI Human Activity Recognition (HAR) using Smartphones Dataset [29], the OPPORTUNITY Dataset [30], the ExtraSensory dataset [31], and the NTU-RGB-D dataset [32]. Although there are many other human activity recognition benchmark datasets [50], we choose the above datasets to better convey our message of achieving feature usage efficiency and interpretability using our adaptive feature selection framework with the following reasons. 
First, the UCI HAR dataset is a clean dataset with no missing values, allowing us to benchmark different methods without any discrepancies in data preprocessing confounding our evaluations. Second, the OPPORTUNITY dataset contains activity labels that correspond to specific sensors. An optimal adaptive feature selector should primarily choose these sensors under specific contexts with clear physical meaning. Finally, the ExtraSensory dataset studies a multilabel classification problem, where two or more labels can be active at any given time, while the NTU-RGB-D dataset is a complicated activity recognition dataset with over 60 classes of activities using data from 25 skeleton joints. These datasets allow us to benchmark model performance in a complex setting. Due to the page limit, our implementation details and results on the NTU-RGB-D dataset are available in Appendix A and B. We investigate several aspects of our model performance on these benchmarks. To show the effect in prediction accuracy when our selection module is considered, we compare its performance to a standard GRU network [51]. To show the effect of considering dynamic feature selection, we compare a nonadaptive `0 formulation that statically selects features by solving (4) [18]. The performance of our `0 regularized formulation is also benchmarked with an `1 regularized formulation. To benchmark the performance of our differentiable relaxation-based optimization strategy, we implement the Straight-Through estimator [52] and Augment-REINFORCE-Merge (ARM) gradient estimates [49] as alternative methods to optimize our formulation. As stated in the previous section, the fully sequential application of ARM was not addressed in the original paper, and will be prohibitively expensive to compute exactly. Hence, we combine ARM and Straight-Through (ST) estimator [52] as another approach to optimize our formulation. More specifically, we calculate the gradients with respect to the Bernoulli variables with ARM, and use the ST estimator to backpropagate the gradients through the Bernoulli variables to previous layers’ parameters. To further show the importance of considering the sparse regularized formulation, we compare with an attention-based feature selection, selecting features based on the largest attention weights. Because attention yields feature attention weights instead of feature subsets, we select features by using a hard threshold α of the attention weights and scaling the selected features by 1− α for different values of α. Indeed, without this modification, we observe that an attention-based feature selection would select 100% of the features at all times. Finally, we have attempted to implement the dynamic feature selection method by Yang et al. [2] as a distinctly different benchmark. However, without any implementation details provided by the authors, we were not able to reproduce their results. UCI HAR Dataset We first test our proposed method on performing simultaneous prediction and adaptive feature selection on the UCI HAR dataset [29]. This dataset consists of 561 smartphone sensor measurements including various gyroscope and accelerometer readings, with the task of inferring the activity that the user performs at any given time. There are six possible activities that a subject can perform: walking, walking upstairs, walking downstairs, sitting, standing, and laying. 
We first compare various optimization methods, using stochastic gradients by differential relaxation using Gumbel-Softmax reparametrization, ARM, ST-ARM, Straight-Through gradients, and an `1 regularized formulation to solve adaptive feature selection. The results are provided in Table 1. As shown, Gumbel-Softmax achieves the best prediction accuracy with the least number of features. Utilizing either the Straight Through estimator, ARM, or ST-ARM for gradient estimation cannot provide a better balance between accuracy and efficiency compared with the Gumbel-Softmax relaxation-based optimization. Indeed, the performance of the ST estimator is expected, as there is a mismatch between the forward propagated activations and the backward propagated gradients in the estimator. Meanwhile, we attribute the lower performance of the ARM and ST-ARM optimizer to its use in a sequential fashion, which was not originally considered. The lower performance of the `1 regularized formulation is expected, as `1 regularization is an approximation to the problem of selecting the optimal feature subset. In the following experiments, we have seen similar trends and only report the results from the Gumbel-Softmax based optimization. Benchmarking results of different models are given in Table 2. As shown, our adaptive feature selection model is able to achieve a competitive accuracy using only 0.28% of the features, or on average about 1.57 sensors at any given time. We also observe that both the attention and our adaptive formulation is able to improve upon the accuracy of the standard GRU, suggesting that feature selection can also regularize the model to improve accuracy. Although the attention-based model yields the best accuracy, this comes at a cost of utilizing around 50% of the features at any given time. We study the effect of the regularization weight λ by varying it from λ ∈ {1, 0.1, 0.01, 0.005, 0.001}. We compare this with the attention model by varying the threshold α used to select features from α ∈ {0.5, 0.9, 0.95, 0.99, 0.995, 0.999}, as well as the nonadaptive model by varying its λ from λ ∈ {1000, 100, . . . 0.01, 0.005, 0.001}. A trade-off curve between the number of selected features and the performance for the three models can be seen in Figure 2(b). As shown in the figure, the accuracy of the attention model suffers increasingly with smaller feature subsets, as attention is not a formulation specifically tailored to find sparse solutions. On the other hand, the accuracy of our adaptive formulation is unaffected by the number of features, suggesting that selecting around 0.3% of the features on average may be optimal for the given problem. It further confirms that our adaptive formulation selects the most informative features given the context. The performance of the nonadaptive model is consistent for feature subsets of size 10% or greater. However, it suffers a drop in accuracy for extremely small feature subsets. This shows that for static selection, selecting a feature set that is too large would result in collecting many redundant features for certain contexts, while selecting a feature set that is too small would be insufficient for maintaining accuracy. An example of dynamically selected features can be seen in Figure 2(a). We plot the prediction of our model compared to the true label and illustrate the features that are used for prediction. We also plot a heatmap for the features selected under each activity in Figure 2(c). 
Note that mainly 5 out of the 561 features are used for prediction at any given time. Observing the selected features, we see that for the static activities such as sitting, standing, and laying, only sensor feature 52 and 63, features relating to the gravity accelerometer, are necessary for prediction. On the other hand, the active states such as walking, walking up, and walking down requires 3 sensor features: sensor 65, 508, and 556, which are related to both the gravity accelerometer and the body accelerometer. This is intuitively appealing as, under the static contexts, the body accelerometer measurements would be relatively constant, and unnecessary for prediction. On the other hand, for the active contexts, the body accelerometer measurements are necessary to reason about how the subject is moving and accurately discriminate between the different active states. Meanwhile, we found that measurements relating to the gyroscope were unnecessary for prediction. UCI OPPORTUNITY Dataset We further test our proposed method on the UCI OPPORTUNITY Dataset [30]. This dataset consists of multiple different label types for human activity, ranging from locomotion, hand gestures, to object interactions. The dataset consists of 242 measurements from accelerometers and Inertial Measurement Units (IMUs) attached to the user, as well as accelerometers attached to different objects with which the user can interact. We use the mid-level gesture activities as the target for our models to predict, which contain gestures related to specific objects, such as opening a door and drinking from a cup. A comparison of the accuracy and the percentage of selected features by different models is given in Table 2, while example predictions and a trade-off curve are constructed and shown in Figures 3(a), 3(b), and 3(c), with a similar trend as the results on the UCI HAR dataset. Notably, the trade-off for the nonadaptive models remains constant for λ ∈ {0.0001, 0.001, . . . , 1}, with a sharp decrease in accuracy for λ ≥ 10. A heatmap for the selected features under each activity is shown in Figure 4. Here, the active sensor features across all activities are features 40 and 42, readings of the IMU attached to the subject’s back, feature 82, readings from the IMU attached to the left upper arm (LUA), and features 230 and 239, location tags that estimate the subject’s position. We posit that these general sensor features are selected to track the subject’s overall position and movements, as they are also predominantly selected in cases with no labels. Meanwhile, sensors 5, 6, and 16, readings from the accelerometer attached to the hip, LUA, and back, are specific to activities involving opening/closing doors or drawers. Interestingly, sensors attached to specific objects, such as accelerometers on doors and cups, are unnecessary for prediction. We attribute this to the severe amount of missing values of these sensors. Indeed, the sensors that have the least amount of missing values are the body sensors and the localization tags. We hypothesize that the model prefers these sensors for their consistent discriminative power on multiple activity types compared to the object specific sensors. In addition to these object specific sensors, 5 IMUs, 9 accelerometers, and 2 localization tags can be completely turned off without significantly affecting prediction performance on this task. ExtraSensory Dataset We further test our proposed method on the ExtraSensory Dataset [31]. 
This is a multilabel classification dataset, where two or more labels can be active at any given time. It consists of 51 different context labels, and 225 sensor features. We frame the problem as a multilabel binary classification problem, where we have a binary output for each label indicating whether it is active. A comparison of the accuracy and selected features by different models tested can be seen in Table 2. Our method is again competitive with the standard GRU model using less than 12% of all the features. A trade-off curve is shown in Figure 5(b), where we see a similar trend for both adaptive and attention models. However, we were unable to obtain a feature selection percentage lower than 25% for the nonadaptive model even with λ as large as $10^4$. We believe that this is because at least 25% of statically selected features are needed; otherwise the nonadaptive model will degrade in performance catastrophically, similar to the OPPORTUNITY dataset results. A heatmap and detailed discussion of the features that our model dynamically selected can be found in Appendix C. The results on these three datasets along with the results on the NTU-RGB-D dataset in Appendix B indicate that our adaptive monitoring framework provides the best trade-off between feature efficiency and accuracy, while the features that it dynamically selects are also interpretable and associated with the actual activity types. 5 CONCLUSIONS We propose a novel method for performing adaptive feature selection by sequential context-dependent feature subset selection, which is cast into a stochastic optimization formulation by modifying the $\ell_0$ regularized minimization formulation. To make this problem tractable, we perform a stochastic relaxation along with a differentiable reparameterization, making the optimization amenable to gradient-based optimization with auto-differentiation. We apply this method to human activity recognition by implementing it in Recurrent Neural Network-based architectures. We benchmark our model on four different activity recognition datasets and have compared it with various adaptive and static feature selection benchmarks. Our results show that our model maintains a desirable prediction performance using a fraction of the sensors or features. The features that our model selected were shown to be interpretable and associated with the activity types. The data is split into 70% for training, 10% for validation, and 20% for testing. The base model we utilize is a one-layer GRU with 2240 neurons for its hidden state. We use a temperature of 0.05 for the Gumbel-Softmax relaxation. We use the binary cross-entropy of the predicted vs. actual labels as the performance measure, where the model outputs a binary decision for each label, representing whether each label is active or not. We do not include the performance loss for the missing labels, and we scale the total performance loss of the observed labels for each batch by (#timepoints × #total labels) / (#observed labels in labelled timepoints); a short illustrative sketch of this masking and rescaling is given below, after the data preparation details. We optimize this scaled loss with a batch size of 100 using the RMSProp optimizer, setting the learning rate to $10^{-4}$ and the smoothing constant to 0.99 for 10000 epochs. We then save both the latest model and the best model validated on the validation set. A.4 NTU-RGB-D DATASET We first preprocess the NTU-RGB-D dataset to remove all the samples with missing skeleton data. We then segment the time-series skeleton data across subjects into 66.5% training, 3.5% validation, and 30% testing sets.
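Returning briefly to the ExtraSensory training loss described above, the following is a minimal sketch of the label masking and rescaling. The tensor names and the use of PyTorch are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def masked_scaled_bce(logits: torch.Tensor,
                      targets: torch.Tensor,
                      observed: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy over observed labels only, rescaled as described above.

    logits, targets, observed: tensors of shape (timepoints, num_labels);
    `observed` is a 0/1 mask marking which labels are annotated at each timepoint.
    The summed loss over observed labels is scaled by
    (#timepoints * #total labels) / (#observed labels in labelled timepoints).
    """
    per_label = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    observed = observed.float()
    loss_observed = (per_label * observed).sum()       # drop missing labels
    timepoints, num_labels = targets.shape
    scale = (timepoints * num_labels) / observed.sum().clamp(min=1.0)
    return loss_observed * scale

# Example with random placeholders (51 context labels, 100 timepoints).
T, L = 100, 51
logits = torch.randn(T, L)
targets = torch.randint(0, 2, (T, L)).float()
observed = torch.randint(0, 2, (T, L)).float()        # which labels are annotated
print(masked_scaled_bce(logits, targets, observed))
```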
The baseline model that we have implemented for the NTU-RGB-D dataset is the Independent RNN [41]. This model consists of stacked RNN modules with several additional dropout, batch normalization, and fully connected layers in between. Our architecture closely follows the densely connected independent RNN of [41]. To incorporate feature selection using either our adaptive formulation or an attention-based formulation, we add an additional RNN to the beginning of this model. This RNN takes as input the 25 different joint features and is tasked to select the joints to use for prediction further along the architecture pipeline. Since the joints are in the form of 3D coordinates, our feature selection method is modified such that it selects either all 3 of the X, Y, and Z coordinates of a particular joint, or none at all. Our architecture can be seen in Figure 6. Similar as the baseline method presented in [41], we have trained this architecture using a batch size of 128 and a sequence length of 20 using the Adam optimizer with a patience threshold of 100 iterations. We then save both the latest model and the best model validated on the validation set. B RESULTS AND DISCUSSION OF THE NTU-RGB-D DATASET We have tested our proposed method on the NTU-RGB-D dataset [32]. This dataset consists of 60 different activities performed by either a single individual or two individuals. The measurements of this dataset are in the form of skeleton data consisting of 25 different 3D coordinates of the corresponding joints of the participating individuals. We compare our method with three different baselines shown in Table 3: the standard independent RNN, a soft attention baseline, and a thresholded attention baseline. We see that our method maintains a competitive accuracy compared to the baseline using less than 50% of the features. On the other hand, because the thresholded attention formulation is not specifically optimized for feature sparsity, we see that it performs significantly worse compared to the other methods. Meanwhile, the softattention slightly improves upon the accuracy of the base architecture. However, as also indicated by our other experiments, soft-attention is not a dynamic feature selection method, and tends to select 100% of the features at all times. A heatmap for the features selected under each activity is shown in Figure 7. Here, we can see that there are two distinct feature sets used for two different types of interactions: single person interactions and two person interactions. Indeed, since the two person activities require sensor measurements from two individuals, the dynamic feature selection would need to prioritize different features to observe their activities as opposed to single person activities. Table 3: Comparison of various methods for activity recognition on the NTU-RGB-D dataset. *Accuracies and average number of features selected are in (%). 
Method | Accuracy | Features Selected
Adaptive | 80.54 | 49.65
Thresholded attention | 40.07 | 52.31
Soft attention | 83.28 | 100
No selection | 83.02 | 100

Figure 7: Heatmap of sensor feature activations under each activity state of the NTU-RGB-D dataset (the 60 activity classes on one axis, the 25 skeleton joints on the other). C RESULTS AND DISCUSSION OF THE EXTRASENSORY DATASET A heatmap of the features selected under each activity state can be seen in Figure 8. As shown, there are four groups of sensor features that are used across activities: the phone magnetometer (57-71), watch accelerometer magnitude (85-88), watch accelerometer direction (101-105), and location (138-147). For two particular states, ‘on a bus’ and ‘drinking alcohol’, phone accelerometer measurements (5-52) become necessary for prediction. Some states such as ‘at home’, ‘at main workplace’, and ‘phone in pocket’ are notably sparse in sensor feature usage. We believe that these states are static, and do not require much sensor usage to monitor effectively. Other sensors such as the phone gyroscope, phone state, audio measurements and properties, compass, and various low-frequency sensors are largely unnecessary for prediction in this dataset.
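To round off the appendix material, here is a minimal sketch of the joint-level gating used for the NTU-RGB-D experiments (Appendix A.4), where a gate either keeps all three coordinates of a joint or drops the joint entirely. The function and tensor names and the use of PyTorch are our own assumptions rather than the authors' code.

```python
import torch

def gate_joints(skeleton: torch.Tensor, joint_gates: torch.Tensor) -> torch.Tensor:
    """Apply per-joint gates to skeleton input, as described in Appendix A.4.

    skeleton:    tensor of shape (batch, num_joints, 3) with X, Y, Z coordinates.
    joint_gates: tensor of shape (batch, num_joints) with values in {0, 1}
                 (or relaxed values in [0, 1] during training).
    A gate of 0 removes all three coordinates of that joint; a gate of 1 keeps them.
    """
    return skeleton * joint_gates.unsqueeze(-1)   # broadcast each gate over X, Y, Z

# Example: gate the 25 joints for a batch of 4 skeleton frames.
batch, num_joints = 4, 25
skeleton = torch.randn(batch, num_joints, 3)
joint_gates = torch.bernoulli(torch.full((batch, num_joints), 0.5))
print(gate_joints(skeleton, joint_gates).shape)   # torch.Size([4, 25, 3])
```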
1. What is the novel contribution of the paper regarding sensor-fusion based temporal multi-class inference tasks?
2. How does the proposed approach reduce energy consumption and improve efficiency in clinical care?
3. What are the strengths and weaknesses of the proposed method in representing a consistent policy for predicting the best combination of sensors?
4. How does the trained model determine the optimal density of measurements, and what are the implications of this feature selection?
5. Are there any concerns or limitations regarding the explainability and interpretability of the policy deciding the new measurement?
Review
Review The authors provide a novel combination of known architectures for an important use case: reducing the density of required measurements in sensor-fusion based temporal multi-class inference tasks. This has implications for the energy consumption of wearable sensors, but could even generalise to measurement timing in clinical care, making the work of nurses more efficient and reducing the stress caused by some medical procedures. The authors present a way to train a consistent policy that predicts the best combination of sensors to estimate the state of the subjects. They have found that a smaller set of features is more explainable than the full set of features. However, I think that this is somewhat of an overpromise. The trained model gives the optimal density of the measurements and can also discern whether old measured values and features are still OK for the inference. This does not mean that those measurements are not needed at all in the features. One can only argue that the required features can be estimated from the older measurements. So, the current set of active sensors is not the full set of required measurement values and cannot be exclusively used to explain the logic of the system. Even more, the logic of the policy deciding on a new measurement is not discussed in an explainability context. The authors provide no data on this. It may simply estimate the time derivative of the signal and ignore a new measurement if that derivative is small enough. As a summary, I support publication of the manuscript, provided the authors modify the message on the interpretable features.
ICLR
Title Dynamic Feature Selection for Efficient and Interpretable Human Activity Recognition Abstract In many machine learning tasks, input features with varying degrees of predictive capability are usually acquired at some cost. For example, in human activity recognition (HAR) and mobile health (mHealth) applications, monitoring performance should be achieved with a low cost to gather different sensory features, as maintaining sensors incur monetary, computation, and energy cost. We propose an adaptive feature selection method that dynamically selects features for prediction at any given time point. We formulate this problem as an `0 minimization problem across time, and cast the combinatorial optimization problem into a stochastic optimization formulation. We then utilize a differentiable relaxation to make the problem amenable to gradient-based optimization. Our evaluations on four activity recognition datasets show that our method achieves a favorable trade-off between performance and the number of features used. Moreover, the dynamically selected features of our approach are shown to be interpretable and associated with the actual activity types. 1 INTRODUCTION Acquiring predictive features is critical for building trustworthy machine learning systems, but this often comes at a daunting cost. Such a cost can be in the form of energy needed to maintain an ambient sensor (Ardywibowo et al., 2019; Yang et al., 2020), time needed to complete an experiment (Kiefer, 1959), or manpower required to monitor a hospital patient (Pierskalla & Brailer, 1994). Therefore, it becomes important not only to maintain good performance in the specified task, but also a low cost to gather these features. Indeed, existing Human Activity Recognition (HAR) methods typically use a fixed set of sensors, potentially collecting redundant features to discriminate contexts (Shen & Varshney, 2013; Aziz et al., 2016; Ertuǧrul & Kaya, 2017; Cheng et al., 2018). Classic feature selection methods such as the LASSO and its variants can address the performance-cost trade-off by optimizing an objective penalized by a term that helps promote feature sparsity (Tibshirani, 1996; Friedman et al., 2010, 2008; Zou & Hastie, 2005). Such feature selection formulations are often static, that is, a fixed set of features are selected a priori. However, different features may offer different predictive power under different contexts. For example, a health worker may not need to monitor a recovering patient as frequently compared to a patient with the declining condition; an experiment performed twice may be redundant; or a smartphone sensor may be predictive when the user is walking but not when the user is in a car. By adaptively selecting which sensor(s) to observe at any given time point, one can further reduce the inherent cost for prediction and achieve a better trade-off between cost and prediction accuracy. In addition to cost-efficiency, an adaptive feature selection formulation can also lead to more interpretable and trustworthy predictions. Specifically, the predictions made by the model are only based on the selected features, providing a clear relationship between input features and model predictions. 
Existing efforts on interpreting models are usually based on some post-analyses of the predictions, including the approaches in (1) visualizing higher level representations or reconstructions of inputs based on them (Li et al., 2016; Mahendran & Vedaldi, 2015), (2) evaluating the sensitivity of predictions to local perturbations of inputs or the input gradients (Selvaraju et al., 2017; Ribeiro et al., 2016), and (3) extracting parts of inputs as justifications for predictions (Lei et al., 2016). Another related but orthogonal direction is model compression of training sparse neural networks with the goal of memory and computational efficiency (Louizos et al., 2017; Tartaglione et al., 2018; Han et al., 2015). All these works require collecting all features first and provide post-hoc feature relevance justifications or network pruning. Recent efforts on dynamic feature selection adaptively assign features based on immediate statistics (Gordon et al., 2012; Bloom et al., 2013; Ardywibowo et al., 2019; Zappi et al., 2008), ignoring the information a feature may have on future predictions. Others treat feature selection as a Markov Decision Process (MDP) and use Reinforcement Learning (RL) to solve it (He & Eisner, 2012; Karayev et al., 2013; Kolamunna et al., 2016; Spaan & Lima, 2009; Satsangi et al., 2015; Yang et al., 2020). However, solving the RL objective is not straightforward. Besides being sensitive to hyperparameter settings in general, approximations such as state space discretization and greedy approximations of the combinatorial objective were used to make the RL problem tractable. To this end, we propose a dynamic feature selection method that can be easily integrated into existing deep architectures and trained from end to end, enabling task-driven dynamic feature selection. To achieve this, we define a feature selection module that dynamically selects which features to use at any given time point. We then formulate a sequential combinatorial optimization that minimizes the trade-off between the learning task performance and the number of features selected at each time point. To make this problem tractable, we cast this combinatorial optimization problem into a stochastic optimization formulation. We then adopt a differentiable relaxation of the discrete feature selection variables to make it amenable to stochastic gradient descent based optimization. It therefore can be plugged-in and jointly optimized with state-of-the-art neural networks, achieving task-driven feature selection over time. To show our method’s ability to adaptively select features while maintaining good performance, we evaluate it on four time-series activity recognition datasets: the UCI Human Activity Recognition (HAR) dataset (Anguita et al., 2013), the OPPORTUNITY dataset (Roggen et al., 2010), the ExtraSensory dataset (Vaizman et al., 2017), as well as the NTU-RGB-D dataset (Shahroudy et al., 2016). Several ablation studies and comparisons with other dynamic and static feature selection methods demonstrate the efficacy of our proposed method. Specifically, our dynamic feature selection is able to use as low as 0.28% of the sensor features while still maintaining good human activity monitoring accuracy. Moreover, our dynamically selected features are shown to be interpretable with direct correspondence with different contexts and activity types. 
2 METHODOLOGY

2.1 THE ℓ0-NORM MINIMIZATION PROBLEM

Many regularization methods have been developed to solve simultaneous feature selection and model parameter estimation (Tibshirani, 1996; Zou & Hastie, 2005; Tibshirani, 1997; Sun et al., 2014; Simon et al., 2011). The ideal penalty for the purpose of feature selection is the ℓ0-norm of the model coefficients for all predictors. This norm is equivalent to the number of nonzero terms in all the model coefficients. Given a dataset D containing N independent and identically distributed (iid) input-output pairs {(x_1, y_1), ..., (x_N, y_N)} with each x_i containing P features, a hypothesis class of predictor functions f(·; θ), and a loss function L(ŷ, y) between prediction ŷ and true output y, the ℓ0-norm regularized optimization problem can be written as follows:

\min_{\theta} \; \frac{1}{N} \sum_{i=1}^{N} L\big(f(x_i; \theta), y_i\big) + \lambda \|\theta\|_0,   (1)

where \|\theta\|_0 = \sum_{j=1}^{P} \mathbb{I}[\theta_j \neq 0] penalizes the number of nonzero model coefficients. In models that linearly transform the input features x_i, penalizing the weights relating to each feature in x_i enables sparse feature subset selection. However, such a selection is static, as it does not adaptively select features that are appropriate for a given context. Moreover, the optimization above is computationally prohibitive as it involves combinatorial optimization to select the subset of nonzero model coefficients corresponding to the input features. In the following, we formulate our adaptive dynamic feature selection problem when learning with multivariate time series. Coupled with training recurrent neural networks, this adaptive feature selection problem is transformed into a sequential context-dependent feature subset selection problem, to which we devise a stochastic relaxation to make the problem tractable.

2.2 DYNAMIC FEATURE SELECTION VIA SEQUENTIAL CONTEXT-DEPENDENT FEATURE SUBSET SELECTION

Instead of finding a subset of nonzero model coefficients, an equivalent formulation can be derived by directly selecting the feature subset. Without loss of generality, let z be a binary vector that indicates whether each feature is selected or not. Then, the original ℓ0-norm optimization formulation can be equivalently written as follows:

\min_{\theta, z} \; \frac{1}{N} \sum_{i=1}^{N} L\big(f(x_i \circ z; \theta), y_i\big) + \lambda \|z\|_0.   (2)

Compared to the original problem, the penalty on the number of selected features is through the ℓ0-norm of z. This formulation is more flexible, as z can be made dependent on corresponding input features, output labels, or any contextual information, allowing us to formulate our dynamic feature selection problem when learning with multivariate time series data. Specifically, let the input-output pairs (x_i, y_i) be a pair of time series of length T_i. At each time t, our model predicts the output y_i^t, as well as the next feature set to select z_i^t. This optimization problem can be formulated as:

\min_{\theta, z} \; \frac{1}{N} \sum_{i=1}^{N} \sum_{t=1}^{T_i} L\big(f(x_i^{0:t-1} \circ z_i^{0:t-1}; \theta), y_i^t\big) + \lambda \sum_{i=1}^{N} \sum_{t=1}^{T_i} \|z_i^t\|_0.   (3)

Here, we are tasked to find a set of parameters θ and feature sets z_i^t for each sample i at each time point t to optimize the trade-off between model performance and the number of selected features. The model then uses the parameters and the previously observed features X_i^t := x_i^{0:t-1} \circ z_i^{0:t-1} to infer the next output y_i^t. However, the above formulation remains intractable, as it involves combinatorial optimization to select the feature subsets at each time point, in addition to the joint optimization of the model parameters and variable selection.
Naively, one may also need to solve a separate optimization problem to find z_i^t for each time point during the run time. In the following section, we derive a relaxation based on stochastic optimization, parameterizing the z_i^t's to make the above problem tractable.

2.3 RELAXATION THROUGH STOCHASTIC OPTIMIZATION

Instead of finding the exact feature subsets indexed by z_i^t that achieve the optimal regularized objective, one can treat these z_i^t's as binary random variables and seek to optimize the distribution π(z|φ) that generates these random variables. For ease of exposition, we first focus on the relaxation of the non-adaptive formulation in (1) as follows:

\min_{\theta, \phi} \; \mathbb{E}_{(x_i, y_i) \sim D} \Big[ \mathbb{E}_{z \sim \pi(z|\phi)} \big[ L(f(x_i \circ z; \theta), y_i) + \lambda \|z\|_0 \big] \Big].   (4)

Note that the solution to this problem is equivalent to the original one, as the original combinatorial problem can be recovered by setting π(z|φ) = Bern(φ), a Bernoulli distribution parameterized by φ, and restricting φ ∈ {0, 1}. Using this relaxation, the regularization term can now be evaluated analytically:

\mathbb{E}_{z \sim \pi(z|\phi)} \big[ \|z\|_0 \big] = \mathbb{E}_{z \sim \mathrm{Bern}(\phi)} \big[ \|z\|_0 \big] = \sum_{j=1}^{P} \pi(z|\phi)_j = \sum_{j=1}^{P} \phi_j.   (5)

On the other hand, the outer expectation in (4) can be approximated using minibatches. Relaxation of binary random variables has been adopted in Louizos et al. (2017) for network architecture sparsification, and in Yamada et al. (2019); Balın et al. (2019) for static feature selection. Here, we extend the above relaxation to time series data, where unlike previous works, the binary random variables are parameterized locally and are context-dependent, and features are selected adaptively across time. We first note that our adaptive feature selection formulation in (3) allows each time point to have its own feature selection distribution π_i^t(z|φ) := π(z|X_i^{t-1}, φ), conditioned on the previously selected observed features X_i^{t-1} as defined above. Let π_i(z|φ) be the set of π_i^t(z|φ) for all t ∈ {1, ..., T_i}. The stochastic relaxation of the adaptive feature selection formulation can be written as follows:

\min_{\theta, \phi} \; \mathbb{E}_{(x_i, y_i) \sim D} \Big[ \mathbb{E}_{z_i \sim \pi_i(z|\phi)} \Big[ \sum_{t=1}^{T_i} L\big(f(X_i^{t-1}; \theta), y_i^t\big) \Big] + \lambda \sum_{t=1}^{T_i} \sum_{j=1}^{P} \pi_i^t(z|\phi)_j \Big].   (6)

2.4 MODEL PARAMETERIZATION AND DIFFERENTIABLE RELAXATION

The difficulty in solving the above problem using gradient descent is that the discrete random variables z_i^t are not directly amenable to stochastic reparameterization techniques. An effective and simple-to-implement formulation that we adopt is the Gumbel-Softmax reparameterization (Jang et al., 2016; Maddison et al., 2016), which relaxes a discrete-valued random variable z parameterized by φ into a continuous random variable z̃. Firstly, we can parameterize π(z|X_i^{t-1}, φ) using a vector-valued function σ(X_i^{t-1}, φ) of the previous observations X_i^{t-1}, with φ now being the parameters of σ(·). The distribution can now be rewritten as π(z|X_i^{t-1}, φ) = Bern(σ(X_i^{t-1}, φ)). With this, the discrete-valued random variables z_i^t can be relaxed into continuous random variables z̃_i^t as follows:

\tilde{z}_i^t = \frac{1}{1 + \exp\!\big(-(\log \sigma(X_i^{t-1}, \phi) + L)/\tau\big)}.   (7)

Here, L = \log u - \log(1 - u) is a sample from the logistic distribution, where u ∼ Unif(0, 1), and τ is a temperature parameter. For low values of τ, z̃_i^t approaches a sample of a binary random variable, recovering the original discrete problem, while for high values, z̃_i^t will equal 1/2. With this, we are able to compute gradient estimates of z̃_i^t and approximate the gradient of z_i^t as ∇_{θ,φ} z_i^t ≈ ∇_{θ,φ} z̃_i^t. This enables us to backpropagate through the discrete random variables and train the selection parameters along with the model parameters jointly using stochastic gradient descent. Meanwhile, at test time, we sample binary random variables from the learned probabilities.
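To make the relaxed sampling step concrete, the following is a minimal sketch (written here in PyTorch; the function and variable names are ours, not the authors' released code) of drawing a relaxed mask z̃ according to Eq. (7) and of computing the analytic expected-ℓ0 penalty of Eq. (5) from the same selection probabilities. At test time one would instead draw hard Bernoulli samples from these probabilities, as described above.

```python
import torch


def relaxed_bernoulli_mask(select_prob: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Draw a relaxed (continuous) feature mask following Eq. (7).

    select_prob: per-feature selection probabilities sigma(X^{t-1}, phi), shape (batch, P).
    tau:         temperature; small values approach hard 0/1 samples.
    """
    u = torch.rand_like(select_prob)                        # u ~ Uniform(0, 1)
    logistic_noise = torch.log(u) - torch.log(1.0 - u)      # L = log u - log(1 - u)
    return torch.sigmoid((torch.log(select_prob) + logistic_noise) / tau)


def expected_l0_penalty(select_prob: torch.Tensor) -> torch.Tensor:
    """Analytic expectation of ||z||_0 under Bern(select_prob), Eq. (5)."""
    return select_prob.sum(dim=-1).mean()


# Example usage on dummy probabilities for a batch of 4 samples with P = 6 features.
probs = torch.full((4, 6), 0.3)
mask = relaxed_bernoulli_mask(probs, tau=0.1)   # differentiable surrogate for z
penalty = expected_l0_penalty(probs)            # added to the loss with weight lambda
```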
2.5 MODEL SPECIFICATION

To complete our formulation, we specify the model architecture that we use. We have implemented our adaptive dynamic feature selection with a Gated Recurrent Unit (GRU) (Cho et al., 2014a), a type of Recurrent Neural Network (RNN) (Graves et al., 2013), as shown in Figure 1. Here, the previous observations X_i^{t-1} are summarized by the hidden state h_i^{t-1}. For adaptive feature selection, the selection distribution is made dependent on h_i^{t-1} using a sigmoid of its linear transformation by a weight matrix W as follows: σ(X_i^{t-1}, φ) = sigmoid(W h_i^{t-1}), such that φ = {W}. We note that such a module can be easily integrated into many existing deep architectures and trained end to end, allowing for task-driven feature selection. For example, the module can be applied to Recurrent Convolutional Neural Networks (RCNN) (Liang & Hu, 2015) to selectively determine which convolutional patches/channels to use, or to general feedforward networks to selectively deactivate certain neurons/channels to reduce computation. We have demonstrated this ability by applying it to an Independent RNN (Li et al., 2018) benchmarked on the NTU-RGB-D dataset (Shahroudy et al., 2016), as detailed in Appendix A.4. With the model specified, our method can be applied to existing human activity recognition datasets. Specifically, we are now able to train a prediction model and dynamic feature selection policy offline, and test it on a withheld testing set. The application of our model to online learning is subject to future work.
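As an illustration of this module, a minimal sketch of one step of the selection-augmented GRU is given below (PyTorch; the class name, layer sizes, and the exact timing of prediction versus selection are our assumptions for illustration and follow Figure 1 only approximately). The hidden state summarizing the masked history produces per-feature selection probabilities through a single linear layer and sigmoid, a relaxed mask is sampled as in Eq. (7), and only the masked input is fed to the GRU cell.

```python
import torch
from torch import nn


class DynamicFeatureSelectionGRU(nn.Module):
    """Sketch of the adaptive feature selection module wrapped around a GRU cell."""

    def __init__(self, num_features: int, hidden_size: int, num_classes: int):
        super().__init__()
        self.cell = nn.GRUCell(num_features, hidden_size)
        self.select = nn.Linear(hidden_size, num_features)  # W in sigma(X, phi) = sigmoid(W h)
        self.classify = nn.Linear(hidden_size, num_classes)

    def forward(self, x_seq: torch.Tensor, tau: float = 0.1):
        """x_seq: (batch, T, num_features); returns per-step class logits and selection probs."""
        batch, T, _ = x_seq.shape
        h = x_seq.new_zeros(batch, self.cell.hidden_size)
        logits, probs = [], []
        for t in range(T):
            p = torch.sigmoid(self.select(h))                # selection probabilities from h^{t-1}
            u = torch.rand_like(p)
            z = torch.sigmoid((torch.log(p) + torch.log(u) - torch.log(1.0 - u)) / tau)  # Eq. (7)
            h = self.cell(x_seq[:, t] * z, h)                # the GRU only sees the selected features
            logits.append(self.classify(h))                  # prediction for this time step
            probs.append(p)                                  # used for the expected-l0 penalty, Eq. (5)
        return torch.stack(logits, dim=1), torch.stack(probs, dim=1)
```

During training, the per-step classification loss on the returned logits would be combined with λ times the sum of the returned probabilities, mirroring (6); at test time the relaxed mask is replaced by hard Bernoulli samples drawn from p.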
3 RELATED WORK

Existing HAR systems typically use a fixed set of sensors, potentially collecting redundant features for easily discriminated contexts. Methods that attempt to find a fixed or static feature set often rank feature sets using metrics such as Information Gain (Shen & Varshney, 2013), or relevancy ranking through a filtering strategy (Aziz et al., 2016; Ertuǧrul & Kaya, 2017; Cheng et al., 2018). However, static feature selection can potentially result in collecting redundant information for highly discriminable contexts.

Work on dynamic feature selection can be divided into Reinforcement Learning (RL) based and non-RL approaches. Non-RL based approaches vary from assigning certain features to certain activities (Gordon et al., 2012), pre-defining feature subsets for prediction (Bloom et al., 2013; Strubell et al., 2015), optimizing the trade-off between prediction entropy and the number of selected features (Ardywibowo et al., 2019), to building a metaclassifier for sensor selection (Zappi et al., 2008). These methods all use immediate rewards to perform feature selection. For predicting long activity sequences, this potentially ignores the information that a feature may have on future predictions, or conversely, overestimates the importance of a feature given previous observations.

Among the RL based approaches, some methods attempt to build an MDP to decide which feature to select next or whether to stop acquiring features and make a prediction (He & Eisner, 2012; Karayev et al., 2013; Kolamunna et al., 2016). These methods condition the choice of one feature on the observation generated by another one, instead of choosing between all sensors simultaneously. Spaan & Lima (2009) and Satsangi et al. (2015) formulated a Partially Observable MDP (POMDP) using a discretization of the continuous state to model the policy. Yang et al. (2020) formulate an RL objective by penalizing the prediction performance by the number of sensors used. Although using a desirable objective, the method employs a greedy maximization process to approximately solve the combinatorial optimization. Moreover, it does not integrate easily with existing deep architectures.

Attention is another method worth noting, as it is able to select the most relevant segments of a sequence for the current prediction (Vaswani et al., 2017). Attention modules have been recently used for activity recognition (Ma et al., 2019). However, like most attention methods, it requires all of the features to be observed before deciding which features are the most important for prediction. Moreover, the number of instances attended to is not penalized. Finally, soft attention methods typically weight the inputs instead of selecting a feature subset. Indeed, our experiments on naively applying attention for dynamic feature selection show that it always selects 100% of the features at all times.

Sparse regularization has previously been formulated for deep models, e.g., Liu et al. (2015); Louizos et al. (2017); Frankle & Carbin (2018), but their focus has primarily been on statically compressing model sizes or reducing overfitting, instead of dynamically selecting features for prediction. In particular, ℓ1 regularization is a common method to promote feature sparsity (Tibshirani, 1996; Friedman et al., 2010, 2008; Zou & Hastie, 2005). Selection or skipping along the temporal direction, to decide when to memorize versus update the model state, has been considered in Hu et al. (2019); Campos et al. (2018); Neil et al. (2016). These works either are not context-dependent or do not consider energy efficiency or interpretability. Additionally, skipping time steps may not be suitable for continuous monitoring tasks including HAR, where we are tasked to give a prediction at every time step. Nevertheless, our dynamic/adaptive feature selection is orthogonal to temporal selection/skipping, and we leave exploring the potential integration of these two directions for future research.

Finally, there have been many formulations that propose to solve the issue of backpropagation through discrete random variables (Jang et al., 2016; Maddison et al., 2016; Tucker et al., 2017; Grathwohl et al., 2017; Yin & Zhou, 2018). REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2017) employ REINFORCE and introduce relaxation-based baselines to reduce the sample variance of the estimator. However, these baseline functions increase the computation and cause a potential conflict between minimizing the sample variance of the gradient estimate and maximizing the expectation objective. Augment-REINFORCE-Merge (ARM) is a self-control gradient estimator that does not need additional baselines (Yin & Zhou, 2018). It provides unbiased gradient estimates that exhibit low variance, but its direct application to autoregressive or sequential setups is not addressed by Yin & Zhou (2018) and leads to approximate gradients. Moreover, an exact sequential formulation would require prohibitive computation, with a number of forward passes that is quadratic in the sequence length.
4 EXPERIMENTS

Benchmark Datasets and Performance Evaluation. We evaluate our model on four different datasets: the UCI Human Activity Recognition (HAR) using Smartphones Dataset (Anguita et al., 2013), the OPPORTUNITY Dataset (Roggen et al., 2010), the ExtraSensory dataset (Vaizman et al., 2017), and the NTU-RGB-D dataset (Shahroudy et al., 2016). Although there are many other human activity recognition benchmark datasets (Chen et al., 2020), we choose the above datasets to better convey our message of achieving feature usage efficiency and interpretability using our adaptive feature selection framework, for the following reasons. First, the UCI HAR dataset is a clean dataset with no missing values, allowing us to benchmark different methods without any discrepancies in data preprocessing confounding our evaluations. Second, the OPPORTUNITY dataset contains activity labels that correspond to specific sensors. An optimal adaptive feature selector should primarily choose these sensors under specific contexts with clear physical meaning. Finally, the ExtraSensory dataset studies a multilabel classification problem, where two or more labels can be active at any given time, while the NTU-RGB-D dataset is a complicated activity recognition dataset with over 60 classes of activities using data from 25 skeleton joints. These datasets allow us to benchmark model performance in a complex setting. For all datasets, we randomly split the data both chronologically and by different subjects. More details for each dataset and its corresponding experiment setup are provided under its own subheading in the following and also in Appendix A. Due to the page limit, our implementation details and results on the NTU-RGB-D dataset are available in Appendices A and B.

We investigate several aspects of our model performance on these benchmarks. To show the effect on prediction accuracy when our selection module is considered, we compare its performance to a standard GRU network (Cho et al., 2014b). To show the effect of considering dynamic feature selection, we compare against a nonadaptive ℓ0 formulation that statically selects features by solving (4) (Louizos et al., 2017). The performance of our ℓ0 regularized formulation is also benchmarked against an ℓ1 regularized formulation. To benchmark the performance of our differentiable relaxation-based optimization strategy, we implement the Straight-Through estimator (Hinton et al., 2012) and Augment-REINFORCE-Merge (ARM) gradient estimates (Yin & Zhou, 2018) as alternative methods to optimize our formulation. As stated in the previous section, the fully sequential application of ARM was not addressed in the original paper and would be prohibitively expensive to compute exactly. Hence, we combine ARM and the Straight-Through (ST) estimator (Hinton et al., 2012) as another approach to optimize our formulation. More specifically, we calculate the gradients with respect to the Bernoulli variables with ARM, and use the ST estimator to backpropagate the gradients through the Bernoulli variables to previous layers' parameters. We have also tested different values for the temperature hyperparameter τ in Appendix D, where we observe that settings with temperature parameters below 1 generally yield the best results with no noticeable performance difference. To further show the importance of considering the sparse regularized formulation, we compare with an attention-based feature selection, selecting features based on the largest attention weights. Because attention yields feature attention weights instead of feature subsets, we select features by using a hard threshold α on the attention weights and scaling the selected features by 1 − α for different values of α. Indeed, without this modification, we observe that an attention-based feature selection would select 100% of the features at all times.
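A small sketch of this thresholding rule follows (PyTorch; the helper name and the interpretation of "scaling by 1 − α" are our reading of the procedure, used only for illustration):

```python
import torch


def threshold_attention_features(x: torch.Tensor, attn: torch.Tensor, alpha: float):
    """Attention-based selection baseline: keep only features whose attention weight
    exceeds the hard threshold alpha, and scale the kept features by (1 - alpha).

    x, attn: tensors of shape (batch, P) with feature values and per-feature attention weights.
    """
    keep = (attn > alpha).float()          # hard feature subset induced by the threshold
    return (1.0 - alpha) * x * keep, keep


# Example: with a high threshold, only strongly attended features survive.
x = torch.randn(2, 5)
attn = torch.tensor([[0.999, 0.20, 0.50, 0.996, 0.10],
                     [0.30, 0.998, 0.10, 0.20, 0.992]])
selected_x, keep_mask = threshold_attention_features(x, attn, alpha=0.99)
```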
Finally, we attempted to implement the dynamic feature selection method by Yang et al. (2020) as a distinctly different benchmark. However, without any implementation details provided by the authors, we were not able to reproduce their results.

UCI HAR Dataset. We first test our proposed method on performing simultaneous prediction and adaptive feature selection on the UCI HAR dataset (Anguita et al., 2013). This dataset consists of 561 smartphone sensor measurements including various gyroscope and accelerometer readings, with the task of inferring the activity that the user performs at any given time. There are six possible activities that a subject can perform: walking, walking upstairs, walking downstairs, sitting, standing, and laying.

We first compare various optimization methods, using stochastic gradients via the differentiable relaxation with the Gumbel-Softmax reparametrization, ARM, ST-ARM, Straight-Through gradients, and an ℓ1 regularized formulation to solve adaptive feature selection. The results are provided in Table 1. As shown, Gumbel-Softmax achieves the best prediction accuracy with the fewest features. Utilizing either the Straight-Through estimator, ARM, or ST-ARM for gradient estimation cannot provide a better balance between accuracy and efficiency compared with the Gumbel-Softmax relaxation-based optimization. Indeed, the performance of the ST estimator is expected, as there is a mismatch between the forward-propagated activations and the backward-propagated gradients in the estimator. Meanwhile, we attribute the lower performance of the ARM and ST-ARM optimizers to their use in a sequential fashion, which was not originally considered. The lower performance of the ℓ1 regularized formulation is expected, as ℓ1 regularization is an approximation to the problem of selecting the optimal feature subset. In the following experiments, we observed similar trends and therefore only report the results from the Gumbel-Softmax based optimization.

Benchmarking results of different models are given in Table 2. As shown, our adaptive feature selection model is able to achieve a competitive accuracy using only 0.28% of the features, or on average about 1.57 sensors at any given time. We also observe that both the attention and our adaptive formulation are able to improve upon the accuracy of the standard GRU, suggesting that feature selection can also regularize the model to improve accuracy. Although the attention-based model yields the best accuracy, this comes at the cost of utilizing around 50% of the features at any given time. We have also checked the average accuracy of our model on a time-aligned testing set to show that our model is stable for long-term predictions in Appendix E.

We study the effect of the regularization weight λ by varying it over λ ∈ {1, 0.1, 0.01, 0.005, 0.001}. We compare this with the attention model by varying the threshold α used to select features over α ∈ {0.5, 0.9, 0.95, 0.99, 0.995, 0.999}, as well as the nonadaptive model by varying its λ over λ ∈ {1000, 100, ..., 0.01, 0.005, 0.001}.
A trade-off curve between the number of selected features and the performance for the three models can be seen in Figure 2(b). As shown in the figure, the accuracy of the attention model suffers increasingly with smaller feature subsets, as attention is not a formulation specifically tailored to find sparse solutions. On the other hand, the accuracy of our adaptive formulation is unaffected by the number of features, suggesting that selecting around 0.3% of the features on average may be optimal for the given problem. It further confirms that our adaptive formulation selects the most informative features given the context.

Table 2: Comparison of various models for adaptive monitoring on three activity recognition datasets. Accuracy metrics and average number of features selected are all in (%).

Method | UCI HAR Accuracy | UCI HAR Features | OPPORTUNITY Accuracy | OPPORTUNITY Features | ExtraSensory Accuracy | ExtraSensory F1 | ExtraSensory Features
Adaptive (Ours), λ = 1 | 97.18 | 0.28 | 84.26 | 15.88 | 91.14 | 55.06 | 11.25
Attention, α = 0.5 | 98.38 | 49.94 | 83.42 | 54.20 | 90.37 | 53.29 | 54.73
Nonadaptive, λ = 1 (Louizos et al., 2017) | 95.49 | 14.35 | 81.63 | 49.57 | 91.13 | 53.18 | 42.32
No selection (GRU) (Cho et al., 2014b) | 96.67 | 100 | 84.16 | 100 | 91.14 | 53.53 | 100

The performance of the nonadaptive model is consistent for feature subsets of size 10% or greater. However, it suffers a drop in accuracy for extremely small feature subsets. This shows that for static selection, selecting a feature set that is too large would result in collecting many redundant features for certain contexts, while selecting a feature set that is too small would be insufficient for maintaining accuracy.

An example of dynamically selected features can be seen in Figure 2(a). We plot the prediction of our model compared to the true label and illustrate the features that are used for prediction. We also plot a heatmap of the features selected under each activity in Figure 2(c). Although these features alone may not be exclusively attributed as the only features necessary for prediction under specific activities, such a visualization is useful to retrospectively observe the features selected by our model at each time point. Note that mainly 5 out of the 561 features are used for prediction at any given time. Observing the selected features, we see that for the static activities such as sitting, standing, and laying, only sensor features 52 and 63, features relating to the gravity accelerometer, are necessary for prediction. On the other hand, the active states such as walking, walking upstairs, and walking downstairs require three sensor features: sensors 65, 508, and 556, which are related to both the gravity accelerometer and the body accelerometer. This is intuitively appealing as, under the static contexts, the body accelerometer measurements would be relatively constant and unnecessary for prediction. On the other hand, for the active contexts, the body accelerometer measurements are necessary to reason about how the subject is moving and to accurately discriminate between the different active states. Meanwhile, we found that measurements relating to the gyroscope were unnecessary for prediction.

UCI OPPORTUNITY Dataset. We further test our proposed method on the UCI OPPORTUNITY Dataset (Roggen et al., 2010). This dataset consists of multiple different label types for human activity, ranging from locomotion and hand gestures to object interactions.
The dataset consists of 242 measurements from accelerometers and Inertial Measurement Units (IMUs) attached to the user, as well as accelerometers attached to different objects with which the user can interact. We use the mid-level gesture activities as the target for our models to predict, which contain gestures related to specific objects, such as opening a door and drinking from a cup. A comparison of the accuracy and the percentage of selected features by different models is given in Table 2, while example predictions and a trade-off curve are constructed and shown in Figures 3(a), 3(b), and 3(c), with a similar trend as the results on the UCI HAR dataset. Notably, the trade-off for the nonadaptive models remains constant for λ ∈ {0.0001, 0.001, . . . , 1}, with a sharp decrease in accuracy for λ ≥ 10. A heatmap for the selected features under each activity is shown in Figure 4. Here, the active sensor features across all activities are features 40 and 42, readings of the IMU attached to the subject’s back, feature 82, readings from the IMU attached to the left upper arm (LUA), and features 230 and 239, location tags that estimate the subject’s position. We posit that these general sensor features are selected to track the subject’s overall position and movements, as they are also predominantly selected in cases with no labels. Meanwhile, sensors 5, 6, and 16, readings from the accelerometer attached to the hip, LUA, and back, are specific to activities involving opening/closing doors or drawers. Interestingly, sensors attached to specific objects, such as accelerometers on doors and cups, are unnecessary for prediction. We attribute this to the severe amount of missing values of these sensors. Indeed, the sensors that have the least amount of missing values are the body sensors and the localization tags. We hypothesize that the model prefers these sensors for their consistent discriminative power on multiple activity types compared to the object specific sensors. In addition to these object specific sensors, 5 IMUs, 9 accelerometers, and 2 localization tags can be completely turned off without significantly affecting prediction performance on this task. ExtraSensory Dataset We further test our proposed method on the ExtraSensory Dataset (Vaizman et al., 2017). This is a multilabel classification dataset, where two or more labels can be active at any given time. It consists of 51 different context labels, and 225 sensor features. We frame the problem as a multilabel binary classification problem, where we have a binary output for each label indicating whether it is active. A comparison of the accuracy and selected features by different models tested can be seen in Table 2. Our method is again competitive with the standard GRU model using less than 12% of all the features. A trade-off curve is shown in Figure 5(b), where we see a similar trend for both adaptive and attention models. However we were unable to obtain a feature selection percentage lower than 25% for the nonadaptive model even with λ as large as 104. We believe that this is because at least 25% of statically selected features are needed; otherwise the nonadaptive model will degrade in performance catastrophically, similar to the OPPORTUNITY dataset results. A heatmap and detailed discussion of the features that our model dynamically selected can be found in Appendix C. 
The results on these three datasets, along with the results on the NTU-RGB-D dataset in Appendix B, indicate that our adaptive monitoring framework provides the best trade-off between feature efficiency and accuracy, while the features that it dynamically selects are also interpretable and associated with the actual activity types.

5 CONCLUSIONS

We propose a novel method for performing adaptive feature selection by sequential context-dependent feature subset selection, which is cast into a stochastic optimization formulation by modifying the ℓ0 regularized minimization formulation. To make this problem tractable, we perform a stochastic relaxation along with a differentiable reparameterization, making the optimization amenable to gradient-based optimization with auto-differentiation. We apply this method to human activity recognition by integrating our method into Recurrent Neural Network-based architectures. We benchmark our model on four different activity recognition datasets and compare it with various adaptive and static feature selection benchmarks. Our results show that our model maintains a desirable prediction performance using a fraction of the sensors or features. The features that our model selected were shown to be interpretable and associated with the activity types.

B RESULTS AND DISCUSSION OF THE NTU-RGB-D DATASET

We have tested our proposed method on the NTU-RGB-D dataset (Shahroudy et al., 2016). This dataset consists of 60 different activities performed by either a single individual or two individuals. The measurements of this dataset are in the form of skeleton data consisting of 25 different 3D coordinates of the corresponding joints of the participating individuals. We compare our method with three different baselines, shown in Table 3: the standard Independent RNN, a soft attention baseline, and a thresholded attention baseline. We see that our method maintains a competitive accuracy compared to the baseline using less than 50% of the features. On the other hand, because the thresholded attention formulation is not specifically optimized for feature sparsity, we see that it performs significantly worse compared to the other methods. Meanwhile, the soft attention slightly improves upon the accuracy of the base architecture. However, as also indicated by our other experiments, soft attention is not a dynamic feature selection method and tends to select 100% of the features at all times. A heatmap of the features selected under each activity is shown in Figure 7. Here, we can see that there are two distinct feature sets used for two different types of interactions: single-person interactions and two-person interactions. Indeed, since the two-person activities require sensor measurements from two individuals, the dynamic feature selection needs to prioritize different features to observe their activities as opposed to single-person activities.

C RESULTS AND DISCUSSION OF THE EXTRASENSORY DATASET

A heatmap of the features selected under each activity state can be seen in Figure 8. As shown, there are four groups of sensor features that are used across activities: the phone magnetometer (57-71), watch accelerometer magnitude (85-88), watch accelerometer direction (101-105), and location (138-147). For two particular states, 'on a bus' and 'drinking alcohol', phone accelerometer measurements (5-52) become necessary for prediction. Some states such as 'at home', 'at main workplace', and 'phone in pocket' are notably sparse in sensor feature usage.
We believe that these states are static, and do not require much sensor usage to monitor effectively. Other sensors such as the phone gyroscope, phone state, audio measurements and properties, compass, and various low-frequency sensors are largely unnecessary for prediction in this dataset.

D EFFECTS OF THE HYPERPARAMETER τ ON MODEL PERFORMANCE

We observe the effects of the temperature hyperparameter in (7) on our model's performance. To do this, we have tested several hyperparameter values in our experiment with the UCI HAR dataset. The results of our tests can be seen in Figure 9. In general, the settings with the temperature parameters below 1 yield the best results with no noticeable performance difference. Once the temperature is set above 1, we observe a sharp increase in errors. We attribute this to the mismatch between training and testing setups, where in testing, discrete binary values are sampled, while in training, the samples are reduced to an equal weighting between the features.

E MODEL PERFORMANCE AND STABILITY ACROSS TIME

We show the average accuracy over every 1000 seconds of running the model on the testing subjects in the UCI HAR dataset in Table 4. Based on the performance of the model across time, the model is shown to be stable for long-term predictions. In general, there is no clear temporal degradation in the testing performance for this dataset. Instead, the change of prediction errors is mostly dependent on the underlying activity types.

F UNION OF ALL FEATURES SELECTED BY THE ADAPTIVE MODEL

Here, in addition to showing the average number of selected features, we compute the percentage of all features considered by our model across the full time-length. In other words, the results presented here show the union of selected features across the time horizon. In Section 4, we chose to present the average number of selected features as it directly reflects the number of required sensors for accurate HAR. Hence, it clearly shows the benefits of our proposed dynamic/adaptive feature selection with respect to the power usage for sensor data collection. From Table 5, it is clear that the percentage of all the features considered across the full time-length is also significantly low for each of the three benchmark datasets, which further validates the potential of our dynamic feature selection even when additional operational cost of turning on/off sensors needs to be considered.
This is a multilabel classification dataset, where two or more labels can be active at any given time. It consists of 51 different context labels and 225 sensor features. We frame the problem as a multilabel binary classification problem, where we have a binary output for each label indicating whether it is active. A comparison of the accuracy and the features selected by the different models tested can be seen in Table 2. Our method is again competitive with the standard GRU model using less than 12% of all the features. A trade-off curve is shown in Figure 5(b), where we see a similar trend for both the adaptive and attention models. However, we were unable to obtain a feature selection percentage lower than 25% for the nonadaptive model even with λ as large as 10^4. We believe that this is because at least 25% of statically selected features are needed; otherwise the nonadaptive model will degrade in performance catastrophically, similar to the OPPORTUNITY dataset results. A heatmap and detailed discussion of the features that our model dynamically selected can be found in Appendix C. The results on these three datasets along with the results on the NTU-RGB-D dataset in Appendix B indicate that our adaptive monitoring framework provides the best trade-off between feature efficiency and accuracy, while the features that it dynamically selects are also interpretable and associated with the actual activity types. 5 CONCLUSIONS We propose a novel method for performing adaptive feature selection by sequential context-dependent feature subset selection, which is cast into a stochastic optimization formulation by modifying the `0 regularized minimization formulation. To make this problem tractable, we perform a stochastic relaxation along with a differentiable reparameterization, making the optimization amenable to gradient-based optimization with auto-differentiation. We apply this method to human activity recognition by implementing it on top of Recurrent Neural Network-based architectures. We benchmark our model on four different activity recognition datasets and compare it with various adaptive and static feature selection benchmarks. Our results show that our model maintains a desirable prediction performance using a fraction of the sensors or features. The features that our model selected were shown to be interpretable and associated with the activity types. The data is split into 70% for training, 10% for validation, and 20% for testing. The base model we utilize is a one-layer GRU with 2240 neurons for its hidden state. We use a temperature of 0.05 for the Gumbel-Softmax relaxation. We use the binary cross-entropy of the predicted vs. actual labels as the performance measure, where the model outputs a binary decision for each label, representing whether each label is active or not. We do not include the performance loss for the missing labels and scale the total performance loss of the observed labels for each batch by (#timepoints × #total labels) / (#observed labels in labelled timepoints). We optimize this scaled loss with a batch size of 100 using the RMSProp optimizer, setting the learning rate to 10^-4 and the smoothing constant to 0.99 for 10000 epochs. We then save both the latest model and the best model validated on the validation set. A.4 NTU-RGB-D DATASET We first preprocess the NTU-RGB-D dataset to remove all the samples with missing skeleton data. We then segment the time-series skeleton data across subjects into 66.5% training, 3.5% validation, and 30% testing sets.
The baseline model that we have implemented for the NTU-RGB-D dataset is the Independent RNN [41]. This model consists of stacked RNN modules with several additional dropout, batch normalization, and fully connected layers in between. Our architecture closely follows the densely connected independent RNN of [41]. To incorporate feature selection using either our adaptive formulation or an attention-based formulation, we add an additional RNN to the beginning of this model. This RNN takes as input the 25 different joint features and is tasked with selecting the joints to use for prediction further along the architecture pipeline. Since the joints are in the form of 3D coordinates, our feature selection method is modified such that it selects either all three of the X, Y, and Z coordinates of a particular joint, or none at all. Our architecture can be seen in Figure 6. Similar to the baseline method presented in [41], we have trained this architecture using a batch size of 128 and a sequence length of 20 using the Adam optimizer with a patience threshold of 100 iterations. We then save both the latest model and the best model validated on the validation set. B RESULTS AND DISCUSSION OF THE NTU-RGB-D DATASET We have tested our proposed method on the NTU-RGB-D dataset [32]. This dataset consists of 60 different activities performed by either a single individual or two individuals. The measurements of this dataset are in the form of skeleton data consisting of 25 different 3D coordinates of the corresponding joints of the participating individuals. We compare our method with three different baselines shown in Table 3: the standard independent RNN, a soft attention baseline, and a thresholded attention baseline. We see that our method maintains a competitive accuracy compared to the baseline using less than 50% of the features. On the other hand, because the thresholded attention formulation is not specifically optimized for feature sparsity, we see that it performs significantly worse compared to the other methods. Meanwhile, soft attention slightly improves upon the accuracy of the base architecture. However, as also indicated by our other experiments, soft attention is not a dynamic feature selection method, and tends to select 100% of the features at all times. A heatmap for the features selected under each activity is shown in Figure 7. Here, we can see that there are two distinct feature sets used for two different types of interactions: single-person interactions and two-person interactions. Indeed, since the two-person activities require sensor measurements from two individuals, the dynamic feature selection would need to prioritize different features to observe these activities as opposed to single-person activities. Table 3: Comparison of various methods for activity recognition on the NTU-RGB-D dataset. *Accuracies and average number of features selected are in (%).
Method                   Accuracy    Features Selected
Adaptive                 80.54       49.65
Thresholded attention    40.07       52.31
Soft attention           83.28       100
No selection             83.02       100

Figure 7: Heatmap of sensor feature activations under each activity state of the NTU-RGB-D dataset (the 60 activities versus the 25 skeleton joints).

C RESULTS AND DISCUSSION OF THE EXTRASENSORY DATASET A heatmap of the features selected under each activity state can be seen in Figure 8. As shown, there are four groups of sensor features that are used across activities: the phone magnetometer (57-71), watch accelerometer magnitude (85-88), watch accelerometer direction (101-105), and location (138-147). For two particular states, ‘on a bus’ and ‘drinking alcohol’, phone accelerometer measurements (5-52) become necessary for prediction. Some states such as ‘at home’, ‘at main workplace’, and ‘phone in pocket’ are notably sparse in sensor feature usage. We believe that these states are static, and do not require much sensor usage to monitor effectively. Other sensors such as the phone gyroscope, phone state, audio measurements and properties, compass, and various low-frequency sensors are largely unnecessary for prediction in this dataset.
1. What is the focus of the paper regarding feature selection? 2. What are the strengths and weaknesses of the proposed method in the paper? 3. Are there any missing works or citations in the paper that are relevant to the topic? 4. How does the reviewer assess the novelty and contribution of the paper? 5. What additional experiments or results could improve the paper's quality?
Review
Review
This paper presents a learning-based binary sampling mechanism for feature selection. It filters salient feature dimensions by sampling from a Gumbel-softmax distribution, which is differentiable and can be trained with the other network parameters. The proposed method is evaluated on several Human Activity Recognition (HAR) datasets. The positive and negative points of this paper can be summarized as follows:
Pros: This paper is well written and easy to follow. The experimental evaluations give positive results.
Cons: Important previous works are missing. Learning to generate categorical samples for RNNs is not a fresh idea. In fact, [a] already employs Gumbel-softmax to sample scales in order to dynamically control temporal pattern learning. More generally, the topic of this paper is connected to a number of previous works aiming to adaptively decide how/when to memorize/update the inputs/states, such as [b] and [c]. These works should also be cited by this paper. With these missing works taken into account, the novelty of this paper becomes incremental and the contribution trivial. Integrating Gumbel-softmax sampling with RNN cells is very straightforward, and the motivation for applying Gumbel-softmax is very similar to [a]. While [a] is proposed for general sequence tasks, the proposed method seems to work only for HAR with multi-dimensional inputs. Since \tau is the only hyperparameter of Gumbel-softmax, evaluating how the value of \tau impacts performance is important, yet no such results are reported in the paper. From the original Gumbel-softmax paper we can see that a sample approximates a one-hot vector when \tau is small and approaches a uniform distribution when \tau grows large. So it is very likely that the performance will become unstable as \tau changes. Showing such experimental results would definitely improve the paper's quality. I would also suggest reporting the means and standard deviations of accuracies with different sampling seeds.
Summary: Considering the concerns listed above, I believe the problems outweigh the strengths of this paper. They should be fixed before acceptance.
[a] H. Hu et al. Learning to Adaptively Scale Recurrent Neural Networks. AAAI 2019.
[b] V. Campos et al. Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks. ICLR 2018.
[c] D. Neil et al. Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences. NIPS 2016.
ICLR
Title Dynamic Feature Selection for Efficient and Interpretable Human Activity Recognition Abstract In many machine learning tasks, input features with varying degrees of predictive capability are usually acquired at some cost. For example, in human activity recognition (HAR) and mobile health (mHealth) applications, monitoring performance should be achieved with a low cost to gather different sensory features, as maintaining sensors incur monetary, computation, and energy cost. We propose an adaptive feature selection method that dynamically selects features for prediction at any given time point. We formulate this problem as an `0 minimization problem across time, and cast the combinatorial optimization problem into a stochastic optimization formulation. We then utilize a differentiable relaxation to make the problem amenable to gradient-based optimization. Our evaluations on four activity recognition datasets show that our method achieves a favorable trade-off between performance and the number of features used. Moreover, the dynamically selected features of our approach are shown to be interpretable and associated with the actual activity types. 1 INTRODUCTION Acquiring predictive features is critical for building trustworthy machine learning systems, but this often comes at a daunting cost. Such a cost can be in the form of energy needed to maintain an ambient sensor (Ardywibowo et al., 2019; Yang et al., 2020), time needed to complete an experiment (Kiefer, 1959), or manpower required to monitor a hospital patient (Pierskalla & Brailer, 1994). Therefore, it becomes important not only to maintain good performance in the specified task, but also a low cost to gather these features. Indeed, existing Human Activity Recognition (HAR) methods typically use a fixed set of sensors, potentially collecting redundant features to discriminate contexts (Shen & Varshney, 2013; Aziz et al., 2016; Ertuǧrul & Kaya, 2017; Cheng et al., 2018). Classic feature selection methods such as the LASSO and its variants can address the performance-cost trade-off by optimizing an objective penalized by a term that helps promote feature sparsity (Tibshirani, 1996; Friedman et al., 2010, 2008; Zou & Hastie, 2005). Such feature selection formulations are often static, that is, a fixed set of features are selected a priori. However, different features may offer different predictive power under different contexts. For example, a health worker may not need to monitor a recovering patient as frequently compared to a patient with the declining condition; an experiment performed twice may be redundant; or a smartphone sensor may be predictive when the user is walking but not when the user is in a car. By adaptively selecting which sensor(s) to observe at any given time point, one can further reduce the inherent cost for prediction and achieve a better trade-off between cost and prediction accuracy. In addition to cost-efficiency, an adaptive feature selection formulation can also lead to more interpretable and trustworthy predictions. Specifically, the predictions made by the model are only based on the selected features, providing a clear relationship between input features and model predictions. 
Existing efforts on interpreting models are usually based on some post-analyses of the predictions, including the approaches in (1) visualizing higher level representations or reconstructions of inputs based on them (Li et al., 2016; Mahendran & Vedaldi, 2015), (2) evaluating the sensitivity of predictions to local perturbations of inputs or the input gradients (Selvaraju et al., 2017; Ribeiro et al., 2016), and (3) extracting parts of inputs as justifications for predictions (Lei et al., 2016). Another related but orthogonal direction is model compression of training sparse neural networks with the goal of memory and computational efficiency (Louizos et al., 2017; Tartaglione et al., 2018; Han et al., 2015). All these works require collecting all features first and provide post-hoc feature relevance justifications or network pruning. Recent efforts on dynamic feature selection adaptively assign features based on immediate statistics (Gordon et al., 2012; Bloom et al., 2013; Ardywibowo et al., 2019; Zappi et al., 2008), ignoring the information a feature may have on future predictions. Others treat feature selection as a Markov Decision Process (MDP) and use Reinforcement Learning (RL) to solve it (He & Eisner, 2012; Karayev et al., 2013; Kolamunna et al., 2016; Spaan & Lima, 2009; Satsangi et al., 2015; Yang et al., 2020). However, solving the RL objective is not straightforward. Besides being sensitive to hyperparameter settings in general, approximations such as state space discretization and greedy approximations of the combinatorial objective were used to make the RL problem tractable. To this end, we propose a dynamic feature selection method that can be easily integrated into existing deep architectures and trained from end to end, enabling task-driven dynamic feature selection. To achieve this, we define a feature selection module that dynamically selects which features to use at any given time point. We then formulate a sequential combinatorial optimization that minimizes the trade-off between the learning task performance and the number of features selected at each time point. To make this problem tractable, we cast this combinatorial optimization problem into a stochastic optimization formulation. We then adopt a differentiable relaxation of the discrete feature selection variables to make it amenable to stochastic gradient descent based optimization. It therefore can be plugged-in and jointly optimized with state-of-the-art neural networks, achieving task-driven feature selection over time. To show our method’s ability to adaptively select features while maintaining good performance, we evaluate it on four time-series activity recognition datasets: the UCI Human Activity Recognition (HAR) dataset (Anguita et al., 2013), the OPPORTUNITY dataset (Roggen et al., 2010), the ExtraSensory dataset (Vaizman et al., 2017), as well as the NTU-RGB-D dataset (Shahroudy et al., 2016). Several ablation studies and comparisons with other dynamic and static feature selection methods demonstrate the efficacy of our proposed method. Specifically, our dynamic feature selection is able to use as low as 0.28% of the sensor features while still maintaining good human activity monitoring accuracy. Moreover, our dynamically selected features are shown to be interpretable with direct correspondence with different contexts and activity types. 
2 METHODOLOGY 2.1 THE `0-NORM MINIMIZATION PROBLEM Many regularization methods have been developed to solve simultaneous feature selection and model parameter estimation (Tibshirani, 1996; Zou & Hastie, 2005; Tibshirani, 1997; Sun et al., 2014; Simon et al., 2011). The ideal penalty for the purpose of feature selection is the `0-norm of the model coefficients for all predictors. This norm is equivalent to the number of nonzero terms in all the model coefficients. Given a dataset D containing N independent and identically distributed (iid) input-output pairs {(x1,y1), . . . , (xN ,yN )} with each xi containing P features, a hypothesis class of predictor functions f(·;θ), and a loss function L(ŷ,y) between prediction ŷ and true output y, the `0-norm regularized optimization problem can be written as follows: min θ 1 N ( N∑ i=1 L(f(xi;θ),yi) ) + λ‖θ‖0, (1) where ‖θ‖0 = ∑P j=1 I[θj 6= 0] penalizes the number of nonzero model coefficients. In the models that linearly transform the input features xi, penalizing the weights relating to each feature in xi enables sparse feature subset selection. However, such a selection is static, as it does not adaptively select features that are appropriate for a given context. Moreover, the optimization above is computationally prohibitive as it involves combinatorial optimization to select the subset of nonzero model coefficients corresponding to the input features. In the following, we formulate our adaptive dynamic feature selection problem when learning with multivariate time series. Coupled with training recurrent neural networks, this adaptive feature selection problem is transformed into a sequential context-dependent feature subset selection problem, to which we devise a stochastic relaxation to make the problem tractable. 2.2 DYNAMIC FEATURE SELECTION VIA SEQUENTIAL CONTEXT-DEPENDENT FEATURE SUBSET SELECTION Instead of finding a subset of nonzero model coefficients, an equivalent formulation can be derived by directly selecting the feature subset. Without loss of generality, let z be a binary vector that indicates whether each feature is selected or not. Then, the original `0-norm optimization formulation can be equivalently written as follows: min θ,z 1 N ( N∑ i=1 L(f(xi ◦ z;θ),yi) ) + λ‖z‖0. (2) Compared to the original problem, the penalty on the number of selected features is through the `0-norm of z. This formulation is more flexible, as z can be made dependent on corresponding input features, output labels, or any contextual information, allowing us to formulate our dynamic feature selection problem when learning with multivariate time series data. Specifically, let the input-output pairs (xi,yi) be a pair of time series data of length Ti. At each time t, our model predicts the output yti , as well as the next feature set to select z t i . This optimization problem can be formulated as: min θ,z 1 N ( N∑ i=1 Ti∑ t=1 L(f(x0:t−1i ◦ z 0:t−1 i ;θ),y t i) ) + λ N∑ i=1 Ti∑ t=1 ‖zti‖0. (3) Here, we are tasked to find a set of parameters θ and feature sets zti for each sample i at each time point t to optimize the trade-off between model performance and the number of selected features. The model then uses the parameters and the previously observed features X ti , x 0:t−1 i ◦ z 0:t−1 i to infer the next output yti . However, the above formulation remains intractable, as it involves combinatorial optimization to select the feature subsets at each time point, in addition to the joint optimization of the model parameters and variable selection. 
Naively, one may also need to solve a separate optimization problem to find zti for each time point during the run time. In the following section, we derive a relaxation based on stochastic optimization parameterizing zti ’s to make the above problem tractable. 2.3 RELAXATION THROUGH STOCHASTIC OPTIMIZATION Instead of finding the exact feature subsets indexed by zti that achieve the optimal regularized objective, one can treat these zti ’s as binary random variables and seek to optimize the distribution π(z|φ) that generates these random variables. For the ease of exposition, we first focus on the relaxation of the non-adaptive formulation in (1) as follows: min θ,φ E(xi,yi)∼D [ Ez∼π(z|φ) [ L(f(xi ◦ z;θ),yi) + λ‖z‖0 ]] . (4) Note that the solution to this problem is equivalent to the original one, as the original combinatorial problem can be recovered by setting π(z|φ) = Bern(φ), a Bernoulli distribution parameterized by φ, and restricting φ ∈ {0, 1}. Using this relaxation, the regularization term can now be evaluated analytically: Ez∼π(z|φ) [ ‖z‖0 ] = Ez∼Bern(φ) [ ‖z‖0 ] = P∑ j=1 π(z|φ)j = P∑ j=1 φj , (5) On the other hand, the outer expectation in (4) can be approximated using minibatches. Relaxation of binary random variables has been adopted in Louizos et al. (2017) for network architecture sparsification, and in Yamada et al. (2019); Balın et al. (2019) for static feature selection. Here, we extend the above relaxation for time series data, where unlike previous works, the binary random variables are parameterized locally and are context-dependent, and features are selected adaptively across time. We first note that our adaptive feature selection formulation in (3) allows each time point to have its own feature selection distribution πti(z|φ) , π(z|X t−1 i ,φ) conditioned on previously selected observed features X t−1i as defined above. Let πi(z|φ) be the set of πti(z|φ) for all t ∈ {1, . . . , Ti}. The stochastic relaxation of the adaptive feature selection formulation can be written as follows: min θ,φ E(xi,yi)∼D [ Ezi∼πi(z|φ) [ Ti∑ t=1 L(f(X t−1i ;θ),y t i) ] + λ Ti∑ t=1 P∑ j=1 πti(z|φ)j ] . (6) 2.4 MODEL PARAMETERIZATION AND DIFFERENTIABLE RELAXATION The difficulty in solving the above problem using gradient descent is that the discrete random variables zti ’s are not directly amenable to stochastic reparameterization techniques. An effective and simple to implement formulation that we adopt is the Gumbel-Softmax reparameterization (Jang et al., 2016; Maddison et al., 2016), which relaxes a discrete valued random variable z parameterized by φ to a continuous random variable z̃. Firstly, we can parameterize π(z|X t−1i ,φ) using a vector-valued function σ(X t−1i ,φ) of the previous observations X t−1 i , with φ now being the parameters of σ(·). The distribution can now be rewritten as π(z|X t−1i ,φ) = Bern(σ(X t−1 i ,φ)). With this, the discrete valued random variables zti can be relaxed into continuous random variables z̃ t i as follows: z̃ti = 1 1 + exp (−(logσ(X t−1i ,φ) + L)/τ) . (7) Here, L = log u− log(1− u) is a logistic distribution, where u ∼ Unif(0, 1), and τ is a temperature parameter. For low values of τ , z̃ti approaches a sample of a binary random variable, recovering the original discrete problem, while for high values, z̃ti will equal 1 2 . With this, we are able to compute gradient estimates of z̃ti and approximate the gradient of z t i as∇θ,φzti ≈ ∇θ,φz̃ti . 
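For concreteness, the relaxation in Eq. (7) can be implemented in a few lines. The sketch below is a minimal PyTorch-style illustration that follows the equation as stated; the function name and the choice of framework are our own assumptions rather than part of the paper.

import torch

def relaxed_bernoulli(prob, tau):
    # prob: tensor of selection probabilities sigma(X, phi), each in (0, 1).
    # tau:  temperature; small values give near-binary samples, large values
    #       push the relaxed samples towards 1/2.
    u = torch.rand_like(prob)
    logistic = torch.log(u) - torch.log1p(-u)   # L = log u - log(1 - u)
    return torch.sigmoid((torch.log(prob) + logistic) / tau)

At test time, hard gates can instead be drawn directly with torch.bernoulli(prob), matching the test-time sampling discussed next.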
This enables us to backpropagate through the discrete random variables and train the selection parameters along with the model parameters jointly using stochastic gradient descent. Meanwhile, at test time, we sample binary random variables from the learned probabilities. 2.5 MODEL SPECIFICATION To complete our formulation, we specify the model architecture that we use. We have implemented our adaptive dynamic feature selection with a Gated Recurrent Unit (GRU) (Cho et al., 2014a), a type of Recurrent Neural Network (RNN) (Graves et al., 2013), as shown in Fig- ure 1. Here, we have the previous observations X t−1i being summarized by the hidden state h t−1 i . For adaptive feature selection, the selection distribution is made dependent on ht−1i using a sigmoid of its linear transformation by a weight matrix W as follows: σ(X t−1i ,φ) = SIGMOID(Wh t−1 i ), such that φ = {W}. We note that such a module can be easily integrated into many existing deep architectures and trained from end to end, allowing for task-driven feature selection. For example, the module can be applied to Recurrent Convolutional Neural Networks (RCNN) (Liang & Hu, 2015) to selectively determine which convolutional patches/channels to use, or to general feedforward networks to selectively deactivate certain neurons/channels to reduce computation. We have demonstrated this ability by applying it to an Independent RNN (Li et al., 2018) benchmarked on the NTU-RGB-D dataset (Shahroudy et al., 2016), as detailed in Appendix A.4. With the model specified, our method can be applied to existing human activity recognition datasets. Specifically, we are now able to train a prediction model and dynamic feature selection policy offline, and test it on a withheld testing set. The application of our model to online learning is subject to future work. 3 RELATED WORK Existing HAR systems typically use a fixed set of sensors, potentially collecting redundant features for easily discriminated contexts. Methods that attempt to find a fixed or static feature set often rank feature sets using metrics such as Information Gain (Shen & Varshney, 2013), or relevancy ranking through a filtering strategy (Aziz et al., 2016; Ertuǧrul & Kaya, 2017; Cheng et al., 2018). However, static feature selection can potentially result in collecting redundant information for highly discriminable contexts. Work on dynamic feature selection can be divided into Reinforcement Learning (RL) based and nonRL approaches. Non-RL based approaches vary from assigning certain features to certain activities (Gordon et al., 2012), pre-defining feature subsets for prediction (Bloom et al., 2013; Strubell et al., 2015), optimizing the trade-off between prediction entropy and the number of selected features (Ardywibowo et al., 2019), to building a metaclassifier for sensor selection (Zappi et al., 2008). These methods all use immediate rewards to perform feature selection. For predicting long activity sequences, this potentially ignores the information that a feature may have on future predictions, or conversely, overestimate the importance of a feature given previous observations. Among the RL based approaches, some methods attempt to build an MDP to decide which feature to select next or whether to stop acquiring features and make a prediction (He & Eisner, 2012; Karayev et al., 2013; Kolamunna et al., 2016). These methods condition the choice of one feature on the observation generated by another one, instead of choosing between all sensors simultaneously. 
Spaan & Lima (2009) and Satsangi et al. (2015) formulated a Partially Observable MDP (POMDP) using a discretization of the continuous state to model the policy. Yang et al. (2020) formulate an RL objective by penalizing the prediction performance by the number of sensors used. Although using a desirable objective, the method employs a greedy maximization process to approximately solve the combinatorial optimization. Moreover, they do not integrate easily with existing deep architectures. Attention is another method worth noting, as it is able to select the most relevant segments of a sequence for the current prediction (Vaswani et al., 2017). Attention modules have been recently used for activity recognition (Ma et al., 2019). However, like most attention methods, it requires all of the features to be observed before deciding which features are the most important for prediction. Moreover, the number of instances attended to is not penalized. Finally, soft attention methods typically weight the inputs, instead of selecting the feature subset. Indeed, our experiments on naively applying attention for dynamic feature selection show that it always selects 100% of the features at all times. Sparse regularization has previously been formulated for deep models, e.g., Liu et al. (2015); Louizos et al. (2017); Frankle & Carbin (2018), but their focus has primarily been in statically compressing model sizes or reducing overfitting, instead of dynamically selecting features for prediction. In particular, `1 regularization is a common method to promote feature sparsity (Tibshirani, 1996; Friedman et al., 2010, 2008; Zou & Hastie, 2005). Selection or skipping along the temporal direction to decide when to memorize vs update model state has been considered in Hu et al. (2019); Campos et al. (2018); Neil et al. (2016). These works either are not context dependent or do not consider energy efficiency or ineterpretability. Additionally, skipping time steps may not be suitable for continuous monitoring tasks including HAR, where we are tasked to give a prediction at every time step. Nevertheless, our dynamic/adaptive feature selection is orthogonal to temporal selection/skipping and we leave exploring the potential integration of these two directions as our future research. Finally, there have been many formulations that propose to solve the issue of backpropagation through discrete random variables (Jang et al., 2016; Maddison et al., 2016; Tucker et al., 2017; Grathwohl et al., 2017; Yin & Zhou, 2018). REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2017) employ REINFORCE and introduce relaxation-based baselines to reduce sample variance of the estimator. However, these baseline functions increase the computation and cause potential conflict between minimizing the sample variance of the gradient estimate and maximizing the expectation objective. Augment-REINFORCE-Merge is a self-control gradient estimator that does not need additional baselines (Yin & Zhou, 2018). It provides unbiased gradient estimates that exhibit low variance, but its direct application to autoregressive or sequential setups is not addressed by Yin & Zhou (2018) and leads to approximate gradients. Moreover, an exact sequential formulation will require prohibitive computation, squared in sequence length forward passes. 
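Before turning to the experiments, we note how the pieces of Sections 2.4 and 2.5 fit together in code. The following sketch is our own illustrative PyTorch-style implementation under stated assumptions (the class and variable names are hypothetical, and the hidden size, temperature, and readout are placeholders); it is not the authors' released code.

import torch
import torch.nn as nn

class AdaptiveSelectionGRU(nn.Module):
    # A GRU whose previous hidden state produces per-feature selection
    # probabilities sigma(W h^{t-1}); a (relaxed) binary gate is sampled and
    # masks the next input before it is observed (Sections 2.4 and 2.5).
    def __init__(self, num_features, hidden_size, num_classes, tau=0.05):
        super().__init__()
        self.cell = nn.GRUCell(num_features, hidden_size)
        self.select = nn.Linear(hidden_size, num_features)   # the matrix W
        self.classify = nn.Linear(hidden_size, num_classes)
        self.tau = tau

    def forward(self, x):                      # x: (batch, time, num_features)
        batch, steps, _ = x.shape
        h = x.new_zeros(batch, self.cell.hidden_size)
        logits, probs = [], []
        for t in range(steps):
            phi = torch.sigmoid(self.select(h))      # selection probabilities
            if self.training:                        # relaxed sample, Eq. (7)
                u = torch.rand_like(phi)
                z = torch.sigmoid((torch.log(phi) + torch.log(u) - torch.log1p(-u)) / self.tau)
            else:                                    # hard Bernoulli gates at test time
                z = torch.bernoulli(phi)
            h = self.cell(x[:, t] * z, h)            # observe only the gated features
            logits.append(self.classify(h))
            probs.append(phi)
        return torch.stack(logits, dim=1), torch.stack(probs, dim=1)

The training loss would then combine the per-step prediction loss on the logits with λ times the sum of the returned probabilities, i.e. the expected `0 penalty of Eq. (6).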
4 EXPERIMENTS Benchmark Datasets and Performance Evaluation We evaluate our model on four different datasets: the UCI Human Activity Recognition (HAR) using Smartphones Dataset (Anguita et al., 2013), the OPPORTUNITY Dataset (Roggen et al., 2010), the ExtraSensory dataset (Vaizman et al., 2017), and the NTU-RGB-D dataset (Shahroudy et al., 2016). Although there are many other human activity recognition benchmark datasets (Chen et al., 2020), we choose the above datasets to better convey our message of achieving feature usage efficiency and interpretability using our adaptive feature selection framework with the following reasons. First, the UCI HAR dataset is a clean dataset with no missing values, allowing us to benchmark different methods without any discrepancies in data preprocessing confounding our evaluations. Second, the OPPORTUNITY dataset contains activity labels that correspond to specific sensors. An optimal adaptive feature selector should primarily choose these sensors under specific contexts with clear physical meaning. Finally, the ExtraSensory dataset studies a multilabel classification problem, where two or more labels can be active at any given time, while the NTU-RGB-D dataset is a complicated activity recognition dataset with over 60 classes of activities using data from 25 skeleton joints. These datasets allow us to benchmark model performance in a complex setting. For all datasets, we randomly split data both chronologically and by different subjects. More details for each dataset and its corresponding experiment setup is provided under its own subheading in the following and also in Appendix A. Due to the page limit, our implementation details and results on the NTU-RGB-D dataset are available in Appendix A and B. We investigate several aspects of our model performance on these benchmarks. To show the effect in prediction accuracy when our selection module is considered, we compare its performance to a standard GRU network (Cho et al., 2014b). To show the effect of considering dynamic feature selection, we compare a nonadaptive `0 formulation that statically selects features by solving (4) (Louizos et al., 2017). The performance of our `0 regularized formulation is also benchmarked with an `1 regularized formulation. To benchmark the performance of our differentiable relaxationbased optimization strategy, we implement the Straight-Through estimator (Hinton et al., 2012) and Augment-REINFORCE-Merge (ARM) gradient estimates (Yin & Zhou, 2018) as alternative methods to optimize our formulation. As stated in the previous section, the fully sequential application of ARM was not addressed in the original paper, and will be prohibitively expensive to compute exactly. Hence, we combine ARM and Straight-Through (ST) estimator (Hinton et al., 2012) as another approach to optimize our formulation. More specifically, we calculate the gradients with respect to the Bernoulli variables with ARM, and use the ST estimator to backpropagate the gradients through the Bernoulli variables to previous layers’ parameters. We also have tested different values for the temperature hyperparameter τ in Appendix D, where we observe that the settings with the temperature parameters below 1 generally yield the best results with no noticeable performance difference. To further show the importance of considering the sparse regularized formulation, we compare with an attention-based feature selection, selecting features based on the largest attention weights. 
Because attention yields feature attention weights instead of feature subsets, we select features by using a hard threshold α of the attention weights and scaling the selected features by 1− α for different values of α. Indeed, without this modification, we observe that an attention-based feature selection would select 100% of the features at all times. Finally, we have attempted to implement the dynamic feature selection method by Yang et al. (2020) as a distinctly different benchmark. However, without any implementation details provided by the authors, we were not able to reproduce their results. UCI HAR Dataset We first test our proposed method on performing simultaneous prediction and adaptive feature selection on the UCI HAR dataset (Anguita et al., 2013). This dataset consists of 561 smartphone sensor measurements including various gyroscope and accelerometer readings, with the task of inferring the activity that the user performs at any given time. There are six possible activities that a subject can perform: walking, walking upstairs, walking downstairs, sitting, standing, and laying. We first compare various optimization methods, using stochastic gradients by differential relaxation using Gumbel-Softmax reparametrization, ARM, ST-ARM, Straight-Through gradients, and an `1 regularized formulation to solve adaptive feature selection. The results are provided in Table 1. As shown, Gumbel-Softmax achieves the best prediction accuracy with the least number of features. Utilizing either the Straight Through estimator, ARM, or ST-ARM for gradient estimation cannot provide a better balance between accuracy and efficiency compared with the Gumbel-Softmax relaxation-based optimization. Indeed, the performance of the ST estimator is expected, as there is a mismatch between the forward propagated activations and the backward propagated gradients in the estimator. Meanwhile, we attribute the lower performance of the ARM and ST-ARM optimizer to its use in a sequential fashion, which was not originally considered. The lower performance of the `1 regularized formulation is expected, as `1 regularization is an approximation to the problem of selecting the optimal feature subset. In the following experiments, we have seen similar trends and only report the results from the Gumbel-Softmax based optimization. Benchmarking results of different models are given in Table 2. As shown, our adaptive feature selection model is able to achieve a competitive accuracy using only 0.28% of the features, or on average about 1.57 sensors at any given time. We also observe that both the attention and our adaptive formulation is able to improve upon the accuracy of the standard GRU, suggesting that feature selection can also regularize the model to improve accuracy. Although the attention-based model yields the best accuracy, this comes at a cost of utilizing around 50% of the features at any given time. We also have checked the average accuracy of our model on a time-aligned testing set to show that our model is stable for long-term predictions in Appendix E. We study the effect of the regularization weight λ by varying it from λ ∈ {1, 0.1, 0.01, 0.005, 0.001}. We compare this with the attention model by varying the threshold α used to select features from α ∈ {0.5, 0.9, 0.95, 0.99, 0.995, 0.999}, as well as the nonadaptive model by varying its λ from λ ∈ {1000, 100, . . . 0.01, 0.005, 0.001}. 
A trade-off curve between the number of selected features and the performance of the three models can be seen in Figure 2(b). As shown in the figure, the accuracy of the attention model suffers increasingly with smaller feature subsets, as attention is not a formulation specifically tailored to find sparse solutions. On the other hand, the accuracy of our adaptive formulation is unaffected by the number of features, suggesting that selecting around 0.3% of the features on average may be optimal for the given problem. It further confirms that our adaptive formulation selects the most informative features given the context.

Table 2: Comparison of various models for adaptive monitoring on three activity recognition datasets. *Accuracy metrics and average number of features selected are all in (%).

Method                                       UCI HAR               OPPORTUNITY           ExtraSensory
                                             Accuracy   Features   Accuracy   Features   Accuracy   F1      Features
Adaptive (Ours), λ = 1                       97.18      0.28       84.26      15.88      91.14      55.06   11.25
Attention, α = 0.5                           98.38      49.94      83.42      54.20      90.37      53.29   54.73
Nonadaptive, λ = 1 (Louizos et al., 2017)    95.49      14.35      81.63      49.57      91.13      53.18   42.32
No selection (GRU) (Cho et al., 2014b)       96.67      100        84.16      100        91.14      53.53   100

The performance of the nonadaptive model is consistent for feature subsets of size 10% or greater. However, it suffers a drop in accuracy for extremely small feature subsets. This shows that for static selection, selecting a feature set that is too large would result in collecting many redundant features for certain contexts, while selecting a feature set that is too small would be insufficient for maintaining accuracy. An example of dynamically selected features can be seen in Figure 2(a). We plot the prediction of our model compared to the true label and illustrate the features that are used for prediction. We also plot a heatmap for the features selected under each activity in Figure 2(c). Although these features alone may not be attributable as the only features necessary for prediction under specific activities, such a visualization is useful for retrospectively observing the features selected by our model at each time point. Note that only about 5 of the 561 features are used for prediction at any given time. Observing the selected features, we see that for the static activities such as sitting, standing, and laying, only sensor features 52 and 63, both relating to the gravity accelerometer, are necessary for prediction. On the other hand, the active states such as walking, walking up, and walking down require three sensor features: sensors 65, 508, and 556, which are related to both the gravity accelerometer and the body accelerometer. This is intuitively appealing as, under the static contexts, the body accelerometer measurements would be relatively constant and unnecessary for prediction. On the other hand, for the active contexts, the body accelerometer measurements are necessary to reason about how the subject is moving and to accurately discriminate between the different active states. Meanwhile, we found that measurements relating to the gyroscope were unnecessary for prediction. UCI OPPORTUNITY Dataset We further test our proposed method on the UCI OPPORTUNITY Dataset (Roggen et al., 2010). This dataset consists of multiple different label types for human activity, ranging from locomotion and hand gestures to object interactions.
The dataset consists of 242 measurements from accelerometers and Inertial Measurement Units (IMUs) attached to the user, as well as accelerometers attached to different objects with which the user can interact. We use the mid-level gesture activities as the target for our models to predict, which contain gestures related to specific objects, such as opening a door and drinking from a cup. A comparison of the accuracy and the percentage of selected features by different models is given in Table 2, while example predictions and a trade-off curve are constructed and shown in Figures 3(a), 3(b), and 3(c), with a similar trend as the results on the UCI HAR dataset. Notably, the trade-off for the nonadaptive models remains constant for λ ∈ {0.0001, 0.001, . . . , 1}, with a sharp decrease in accuracy for λ ≥ 10. A heatmap for the selected features under each activity is shown in Figure 4. Here, the active sensor features across all activities are features 40 and 42, readings of the IMU attached to the subject’s back, feature 82, readings from the IMU attached to the left upper arm (LUA), and features 230 and 239, location tags that estimate the subject’s position. We posit that these general sensor features are selected to track the subject’s overall position and movements, as they are also predominantly selected in cases with no labels. Meanwhile, sensors 5, 6, and 16, readings from the accelerometer attached to the hip, LUA, and back, are specific to activities involving opening/closing doors or drawers. Interestingly, sensors attached to specific objects, such as accelerometers on doors and cups, are unnecessary for prediction. We attribute this to the severe amount of missing values of these sensors. Indeed, the sensors that have the least amount of missing values are the body sensors and the localization tags. We hypothesize that the model prefers these sensors for their consistent discriminative power on multiple activity types compared to the object specific sensors. In addition to these object specific sensors, 5 IMUs, 9 accelerometers, and 2 localization tags can be completely turned off without significantly affecting prediction performance on this task. ExtraSensory Dataset We further test our proposed method on the ExtraSensory Dataset (Vaizman et al., 2017). This is a multilabel classification dataset, where two or more labels can be active at any given time. It consists of 51 different context labels, and 225 sensor features. We frame the problem as a multilabel binary classification problem, where we have a binary output for each label indicating whether it is active. A comparison of the accuracy and selected features by different models tested can be seen in Table 2. Our method is again competitive with the standard GRU model using less than 12% of all the features. A trade-off curve is shown in Figure 5(b), where we see a similar trend for both adaptive and attention models. However we were unable to obtain a feature selection percentage lower than 25% for the nonadaptive model even with λ as large as 104. We believe that this is because at least 25% of statically selected features are needed; otherwise the nonadaptive model will degrade in performance catastrophically, similar to the OPPORTUNITY dataset results. A heatmap and detailed discussion of the features that our model dynamically selected can be found in Appendix C. 
The results on these three datasets along with the results on the NTU-RGB-D dataset in Appendix B indicate that our adaptive monitoring framework provides the best trade-off between feature efficiency and accuracy, while the features that it dynamically selects are also interpretable and associated with the actual activity types. 5 CONCLUSIONS We propose a novel method for performing adaptive feature selection by sequential context-dependent feature subset selection, which is cast into a stochastic optimization formulation by modifying the `0 regularized minimization formulation. To make this problem tractable, we perform a stochastic relaxation along with a differentiable reparamaterization, making the optimization amenable to gradient-based optimization with auto-differentiation. We apply this method to human activity recognition by implementing our method to Recurrent Neural Network-based architectures. We benchmark our model on four different activity recognition datasets and have compared it with various adaptive and static feature selection benchmarks. Our results show that our model maintains a desirable prediction performance using a fraction of the sensors or features. The features that our model selected were shown to be interpretable and associated with the activity types. B RESULTS AND DISCUSSION OF THE NTU-RGB-D DATASET We have tested our proposed method on the NTU-RGB-D dataset (Shahroudy et al., 2016). This dataset consists of 60 different activities performed by either a single individual or two individuals. The measurements of this dataset are in the form of skeleton data consisting of 25 different 3D coordinates of the corresponding joints of the participating individuals. We compare our method with three different baselines shown in Table 3: the standard independent RNN, a soft attention baseline, and a thresholded attention baseline. We see that our method maintains a competitive accuracy compared to the baseline using less than 50% of the features. On the other hand, because the thresholded attention formulation is not specifically optimized for feature sparsity, we see that it performs significantly worse compared to the other methods. Meanwhile, the softattention slightly improves upon the accuracy of the base architecture. However, as also indicated by our other experiments, soft-attention is not a dynamic feature selection method, and tends to select 100% of the features at all times. A heatmap for the features selected under each activity is shown in Figure 7. Here, we can see that there are two distinct feature sets used for two different types of interactions: single person interactions and two person interactions. Indeed, since the two person activities require sensor measurements from two individuals, the dynamic feature selection would need to prioritize different features to observe their activities as opposed to single person activities. C RESULTS AND DISCUSSION OF THE EXTRASENSORY DATASET A heatmap of the features selected under each activity state can be seen in Figure 8. As shown, there are four groups of sensor features that are used across activities: the phone magnetometer (57-71), watch accelerometer magnitude (85-88), watch accelerometer direction (101-105), and location (138-147). For two particular states, ‘on a bus’ and ‘drinking alcohol’, phone accelerometer measurements (5-52) become necessary for prediction. Some states such as ‘at home’, ‘at main workplace’, and ‘phone in pocket’ are notably sparse in sensor feature usage. 
We believe that these states are static, and do not require much sensor usage to monitor effectively. Other sensors such as the phone gyroscope, phone state, audio measurements and properties, compass, and various low-frequency sensors are largely unnecessary for prediction in this dataset. D EFFECTS OF THE HYPERPARAMETER τ ON MODEL PERFORMANCE We observe the effects of the temperature hyperparameter in (7) on our model’s performance. To do this, we have tested several hyperparameter values in our experiment with the UCI HAR dataset. The results of our tests can be seen in Figure 9. In general, the settings with the temperature parameters below 1 generally yield the best results with no noticeable performance difference. Once the temperature is set to above 1, we observe a sharp increase in errors. We attribute this to the mismatch between training and testing setups, where in testing, discrete binary values are sampled while in training, the samples are reduced to an equal weighting between the features. E MODEL PERFORMANCE AND STABILITY ACROSS TIME We show the average accuracy over every 1000 seconds of running the model on the testing subjects in the UCI HAR dataset in Table 4. Based on the performance of the model across time, the model is shown to be stable for long-term predictions. In general, there is no clear temporal degradation in the testing performance for this dataset. Instead, the change of prediction errors is mostly dependent on the underlying activity types. F UNION OF ALL FEATURES SELECTED BY THE ADAPTIVE MODEL Here, in addition to showing the average number of selected features, we compute the percentage of all features considered by our model across the full time-length. In other words, the results presented here show the union of selected features across the time horizon. In Section 4, we chose to present the average number of selected features as it directly reflects the number of required sensors for accurate HAR. Hence, it clearly shows the benefits of our proposed dynamic/adaptive feature selection with respect to the power usage for sensor data collection. From Table 5, it is clear that the percentage of all the features considered across the full time-length is also significantly low for each of the three benchmark datasets, which further validates the potential of our dynamic feature selection even when additional operational cost of turning on/off sensors needs to be considered.
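As a concrete illustration of the two usage statistics referred to here (the per-step average reported in Section 4 and the across-time union reported in Table 5), the following short sketch computes both from an array of sampled gates for a single test sequence; the function name and array layout are our own illustrative assumptions.

import numpy as np

def feature_usage(gates):
    # gates: (T, P) binary array of sampled gates, 1 = feature selected at that step.
    gates = np.asarray(gates, dtype=bool)
    avg_per_step = gates.mean(axis=1).mean()   # average fraction of features used per time step
    union = gates.any(axis=0).mean()           # fraction of features selected at least once
    return avg_per_step, union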
Moreover, the dynamically selected features of our approach are shown to be interpretable and associated with the actual activity types. 1 INTRODUCTION Acquiring predictive features is critical for building trustworthy machine learning systems, but this often comes at a daunting cost. Such a cost can be in the form of energy needed to maintain an ambient sensor [1, 2], time needed to complete an experiment [3], or manpower required to monitor a hospital patient [4]. Therefore, it becomes important not only to maintain good performance in the specified task, but also a low cost to gather these features. Indeed, existing Human Activity Recognition (HAR) methods typically use a fixed set of sensors, potentially collecting redundant features to discriminate contexts [5, 6, 7, 8]. Classic feature selection methods such as the LASSO and its variants can address the performance-cost trade-off by optimizing an objective penalized by a term that helps promote feature sparsity [9, 10, 11, 12]. Such feature selection formulations are often static, that is, a fixed set of features are selected a priori. However, different features may offer different predictive power under different contexts. For example, a health worker may not need to monitor a recovering patient as frequently compared to a patient with the declining condition; an experiment performed twice may be redundant; or a smartphone sensor may be predictive when the user is walking but not when the user is in a car. By adaptively selecting which sensor(s) to observe at any given time point, one can further reduce the inherent cost for prediction and achieve a better trade-off between cost and prediction accuracy. In addition to cost-efficiency, an adaptive feature selection formulation can also lead to more inter- pretable and trustworthy predictions. Specifically, the predictions made by the model are only based on the selected features, providing a clear relationship between input features and model predictions. Existing efforts on interpreting models are usually based on some post-analyses of the predictions, including the approaches in (1) visualizing higher level representations or reconstructions of inputs based on them [13, 14], (2) evaluating the sensitivity of predictions to local perturbations of inputs or the input gradients [15, 16], and (3) extracting parts of inputs as justifications for predictions [17]. Another related but orthogonal direction is model compression of training sparse neural networks with the goal of memory and computational efficiency [18, 19, 20]. All these works require collecting all features first and provide post-hoc feature relevance justifications or network pruning. Recent efforts on dynamic feature selection adaptively assign features based on immediate statistics [21, 22, 1, 23], ignoring the information a feature may have on future predictions. Others treat feature selection as a Markov Decision Process (MDP) and use Reinforcement Learning (RL) to solve it [24, 25, 26, 27, 28, 2]. However, solving the RL objective is not straightforward. Besides being sensitive to hyperparameter settings in general, approximations such as state space discretization and greedy approximations of the combinatorial objective were used to make the RL problem tractable. To this end, we propose a dynamic feature selection method that can be easily integrated into existing deep architectures and trained from end to end, enabling task-driven dynamic feature selection. 
To achieve this, we define a feature selection module that dynamically selects which features to use at any given time point. We then formulate a sequential combinatorial optimization that minimizes the trade-off between the learning task performance and the number of features selected at each time point. To make this problem tractable, we cast this combinatorial optimization problem into a stochastic optimization formulation. We then adopt a differentiable relaxation of the discrete feature selection variables to make it amenable to stochastic gradient descent based optimization. It therefore can be plugged-in and jointly optimized with state-of-the-art neural networks, achieving task-driven feature selection over time. To show our method’s ability to adaptively select features while maintaining good performance, we evaluate it on four time-series activity recognition datasets: the UCI Human Activity Recognition (HAR) dataset [29], the OPPORTUNITY dataset [30], the ExtraSensory dataset [31], as well as the NTU-RGB-D dataset [32]. Several ablation studies and comparisons with other dynamic and static feature selection methods demonstrate the efficacy of our proposed method. Specifically, our dynamic feature selection is able to use as low as 0.28% of the sensor features while still maintaining good human activity monitoring accuracy. Moreover, our dynamically selected features are shown to be interpretable with direct correspondence with different contexts and activity types. 2 METHODOLOGY 2.1 THE ℓ0-NORM MINIMIZATION PROBLEM Many regularization methods have been developed to solve simultaneous feature selection and model parameter estimation [9, 12, 33, 34, 35]. The ideal penalty for the purpose of feature selection is the ℓ0-norm of the model coefficients for all predictors. This norm is equivalent to the number of nonzero terms in all the model coefficients. Given a dataset D containing N independent and identically distributed (iid) input-output pairs {(x1, y1), . . . , (xN, yN)} with each xi containing P features, a hypothesis class of predictor functions f(·; θ), and a loss function L(ŷ, y) between prediction ŷ and true output y, the ℓ0-norm regularized optimization problem can be written as follows:

$$\min_{\theta}\; \frac{1}{N} \sum_{i=1}^{N} L\big(f(x_i;\theta),\, y_i\big) + \lambda \|\theta\|_0, \qquad (1)$$

where $\|\theta\|_0 = \sum_{j=1}^{P} \mathbb{I}[\theta_j \neq 0]$ penalizes the number of nonzero model coefficients. In the models that linearly transform the input features xi, penalizing the weights relating to each feature in xi enables sparse feature subset selection. However, such a selection is static, as it does not adaptively select features that are appropriate for a given context. Moreover, the optimization above is computationally prohibitive as it involves combinatorial optimization to select the subset of nonzero model coefficients corresponding to the input features. In the following, we formulate our adaptive dynamic feature selection problem when learning with multivariate time series. Coupled with training recurrent neural networks, this adaptive feature selection problem is transformed into a sequential context-dependent feature subset selection problem, to which we devise a stochastic relaxation to make the problem tractable. 2.2 DYNAMIC FEATURE SELECTION VIA SEQUENTIAL CONTEXT-DEPENDENT FEATURE SUBSET SELECTION Instead of finding a subset of nonzero model coefficients, an equivalent formulation can be derived by directly selecting the feature subset.
Without loss of generality, let z be a binary vector that indicates whether each feature is selected or not. Then, the original ℓ0-norm optimization formulation can be equivalently written as follows:

$$\min_{\theta, z}\; \frac{1}{N} \sum_{i=1}^{N} L\big(f(x_i \circ z;\theta),\, y_i\big) + \lambda \|z\|_0. \qquad (2)$$

Compared to the original problem, the penalty on the number of selected features is through the ℓ0-norm of z. This formulation is more flexible, as z can be made dependent on corresponding input features, output labels, or any contextual information, allowing us to formulate our dynamic feature selection problem when learning with multivariate time series data. Specifically, let the input-output pairs (xi, yi) be a pair of time series data of length Ti. At each time t, our model predicts the output $y_i^t$, as well as the next feature set to select $z_i^t$. This optimization problem can be formulated as:

$$\min_{\theta, z}\; \frac{1}{N} \sum_{i=1}^{N} \sum_{t=1}^{T_i} L\big(f(x_i^{0:t-1} \circ z_i^{0:t-1};\theta),\, y_i^t\big) + \lambda \sum_{i=1}^{N} \sum_{t=1}^{T_i} \|z_i^t\|_0. \qquad (3)$$

Here, we are tasked to find a set of parameters θ and feature sets $z_i^t$ for each sample i at each time point t to optimize the trade-off between model performance and the number of selected features. The model then uses the parameters and the previously observed features $\mathcal{X}_i^t \triangleq x_i^{0:t-1} \circ z_i^{0:t-1}$ to infer the next output $y_i^t$. However, the above formulation remains intractable, as it involves combinatorial optimization to select the feature subsets at each time point, in addition to the joint optimization of the model parameters and variable selection. Naively, one may also need to solve a separate optimization problem to find $z_i^t$ for each time point during the run time. In the following section, we derive a relaxation based on stochastic optimization parameterizing the $z_i^t$'s to make the above problem tractable. 2.3 RELAXATION THROUGH STOCHASTIC OPTIMIZATION Instead of finding the exact feature subsets indexed by $z_i^t$ that achieve the optimal regularized objective, one can treat these $z_i^t$'s as binary random variables and seek to optimize the distribution π(z|φ) that generates these random variables. For the ease of exposition, we first focus on the relaxation of the non-adaptive formulation in (1) as follows:

$$\min_{\theta, \phi}\; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}}\Big[ \mathbb{E}_{z \sim \pi(z|\phi)}\big[ L(f(x_i \circ z;\theta),\, y_i) + \lambda \|z\|_0 \big] \Big]. \qquad (4)$$

Note that the solution to this problem is equivalent to the original one, as the original combinatorial problem can be recovered by setting π(z|φ) = Bern(φ), a Bernoulli distribution parameterized by φ, and restricting φ ∈ {0, 1}. Using this relaxation, the regularization term can now be evaluated analytically:

$$\mathbb{E}_{z \sim \pi(z|\phi)}\big[ \|z\|_0 \big] = \mathbb{E}_{z \sim \mathrm{Bern}(\phi)}\big[ \|z\|_0 \big] = \sum_{j=1}^{P} \pi(z|\phi)_j = \sum_{j=1}^{P} \phi_j. \qquad (5)$$

On the other hand, the outer expectation in (4) can be approximated using minibatches. To extend the above relaxation for time series data, we first note that our adaptive feature selection formulation in (3) allows each time point to have its own feature selection distribution $\pi_i^t(z|\phi) \triangleq \pi(z|\mathcal{X}_i^{t-1}, \phi)$ conditioned on previous observations $\mathcal{X}_i^{t-1}$. Let $\pi_i(z|\phi)$ be the set of $\pi_i^t(z|\phi)$ for all t ∈ {1, . . . , Ti}. The stochastic relaxation of the adaptive feature selection formulation can be written as follows:

$$\min_{\theta, \phi}\; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}}\Big[ \mathbb{E}_{z_i \sim \pi_i(z|\phi)}\Big[ \sum_{t=1}^{T_i} L\big(f(\mathcal{X}_i^{t-1};\theta),\, y_i^t\big) \Big] + \lambda \sum_{t=1}^{T_i} \sum_{j=1}^{P} \pi_i^t(z|\phi)_j \Big]. \qquad (6)$$

2.4 MODEL PARAMETERIZATION AND DIFFERENTIABLE RELAXATION The difficulty in solving the above problem using gradient descent is that the discrete random variables $z_i^t$'s are not directly amenable to stochastic reparameterization techniques.
An effective and simple to implement formulation that we adopt is the Gumbel-Softmax reparameterization [36, 37], which relaxes a discrete valued random variable z parameterized by φ to a continuous random variable z̃. Firstly, we can parameterize $\pi(z|\mathcal{X}_i^{t-1}, \phi)$ using a vector-valued function $\sigma(\mathcal{X}_i^{t-1}, \phi)$ of the previous observations $\mathcal{X}_i^{t-1}$, with φ now being the parameters of σ(·). The distribution can now be rewritten as $\pi(z|\mathcal{X}_i^{t-1}, \phi) = \mathrm{Bern}\big(\sigma(\mathcal{X}_i^{t-1}, \phi)\big)$. With this, the discrete valued random variables $z_i^t$ can be relaxed into continuous random variables $\tilde{z}_i^t$ as follows:

$$\tilde{z}_i^t = \frac{1}{1 + \exp\!\big(-(\log \sigma(\mathcal{X}_i^{t-1}, \phi) + L)/\tau\big)}. \qquad (7)$$

Here, $L = \log u - \log(1 - u)$ is a sample from the logistic distribution, where u ∼ Unif(0, 1), and τ is a temperature parameter. For low values of τ, $\tilde{z}_i^t$ approaches a sample of a binary random variable, recovering the original discrete problem, while for high values, $\tilde{z}_i^t$ will equal 1/2. With this, we are able to compute gradient estimates of $\tilde{z}_i^t$ and approximate the gradient of $z_i^t$ as $\nabla_{\theta,\phi} z_i^t \approx \nabla_{\theta,\phi} \tilde{z}_i^t$. This enables us to backpropagate through the discrete random variables and train the selection parameters along with the model parameters jointly using stochastic gradient descent. Meanwhile, at test time, we sample binary random variables from the learned probabilities. 2.5 MODEL SPECIFICATION To complete our formulation, we specify the model architecture that we use. We have implemented our adaptive dynamic feature selection with a Gated Recurrent Unit (GRU) [38], a type of Recurrent Neural Network (RNN) [39], as shown in Figure 1. Here, we have the previous observations $\mathcal{X}_i^{t-1}$ being summarized by the hidden state $h_i^{t-1}$. For adaptive feature selection, the selection distribution is made dependent on $h_i^{t-1}$ using a sigmoid of its linear transformation by a weight matrix W as follows: $\sigma(\mathcal{X}_i^{t-1}, \phi) = \mathrm{SIGMOID}(W h_i^{t-1})$, such that φ = {W}. We note that such a module can be easily integrated into many existing deep architectures and trained from end to end, allowing for task-driven feature selection. For example, the module can be applied to Recurrent Convolutional Neural Networks (RCNN) [40] to selectively determine which convolutional patches/channels to use, or to general feedforward networks to selectively deactivate certain neurons/channels to reduce computation. We have demonstrated this ability by applying it to an Independent RNN [41] benchmarked on the NTU-RGB-D dataset [32], as detailed in Appendix A.4 (an illustrative code sketch of this selection module appears below). 3 RELATED WORK Existing HAR systems typically use a fixed set of sensors, potentially collecting redundant features for easily discriminated contexts. Methods that attempt to find a fixed or static feature set often rank feature sets using metrics such as Information Gain [5], or relevancy ranking through a filtering strategy [6, 7, 8]. However, static feature selection can potentially result in collecting redundant information for highly discriminable contexts. Work on dynamic feature selection can be divided into Reinforcement Learning (RL) based and non-RL approaches. Non-RL based approaches vary from assigning certain features to certain activities [21], pre-defining feature subsets for prediction [22, 42], optimizing the trade-off between prediction entropy and the number of selected features [1], to building a metaclassifier for sensor selection [23]. These methods all use immediate rewards to perform feature selection.
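To make the selection module of Sections 2.3-2.5 above more concrete, the following is a minimal PyTorch-style sketch rather than the authors' released code: the class and argument names (SelectionGate, hidden_dim, hard) are illustrative, and the default temperature simply mirrors the value reported in the appendix. The module draws relaxed gates via Eq. (7) during training, samples hard Bernoulli gates at test time, and returns the analytic expected-ℓ0 penalty of Eq. (5).

```python
import torch
import torch.nn as nn


class SelectionGate(nn.Module):
    """Per-time-step feature selection gate (cf. Sections 2.3-2.5)."""

    def __init__(self, hidden_dim: int, num_features: int, tau: float = 0.05):
        super().__init__()
        self.W = nn.Linear(hidden_dim, num_features)  # phi = {W}, sigma(W h_{t-1})
        self.tau = tau

    def forward(self, h_prev: torch.Tensor, hard: bool = False):
        probs = torch.sigmoid(self.W(h_prev))              # selection probabilities
        if hard:
            # test time: sample binary gates from the learned probabilities
            z = torch.bernoulli(probs)
        else:
            # training: logistic noise L = log u - log(1 - u), then Eq. (7)
            u = torch.rand_like(probs).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)
            z = torch.sigmoid((torch.log(probs.clamp_min(1e-6)) + noise) / self.tau)
        expected_l0 = probs.sum(dim=-1)                    # analytic penalty, Eq. (5)
        return z, expected_l0
```

During training, the returned penalty would be weighted by λ and added to the task loss, matching the objective in Eq. (6).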
For predicting long activity sequences, this potentially ignores the information that a feature may have on future predictions, or conversely, overestimate the importance of a feature given previous observations. Among the RL based approaches, some methods attempt to build an MDP to decide which feature to select next or whether to stop acquiring features and make a prediction [24, 25, 26]. These methods condition the choice of one feature on the observation generated by another one, instead of choosing between all sensors simultaneously. Spaan and Lima [27] and Satsangi et al. [28] formulated a Partially Observable MDP (POMDP) using a discretization of the continuous state to model the policy. Yang et al. [2] formulate an RL objective by penalizing the prediction performance by the number of sensors used. Although using a desirable objective, the method employs a greedy maximization process to approximately solve the combinatorial optimization. Moreover, they do not integrate easily with existing deep architectures. Attention is another method worth noting, as it is able to select the most relevant segments of a sequence for the current prediction [43]. Attention modules have been recently used for activity recognition [44]. However, like most attention methods, it requires all of the features to be observed before deciding which features are the most important for prediction. Moreover, the number of instances attended to is not penalized. Finally, soft attention methods typically weight the inputs, instead of selecting the feature subset. Indeed, our experiments on naively applying attention for dynamic feature selection show that it always selects 100% of the features at all times. Sparse regularization has previously been formulated for deep models, e.g., [45, 18, 46], but their focus has primarily been in statically compressing model sizes or reducing overfitting, instead of dynamically selecting features for prediction. In particular, `1 regularization is a common method to promote feature sparsity [9, 10, 11, 12]. Finally, there have been many formulations that propose to solve the issue of backpropagation through discrete random variables [36, 37, 47, 48, 49]. REBAR [47] and RELAX [48] employ REINFORCE and introduce relaxation-based baselines to reduce sample variance of the estimator. However, these baseline functions increase the computation and cause potential conflict between minimizing the sample variance of the gradient estimate and maximizing the expectation objective. Augment-REINFORCE-Merge is a self-control gradient estimator that does not need additional baselines [49]. It provides unbiased gradient estimates that exhibit low variance, but its direct application to autoregressive or sequential setups is not addressed by Yin and Zhou [49] and leads to approximate gradients. Moreover, an exact sequential formulation will require prohibitive computation, squared in sequence length forward passes. 4 EXPERIMENTS Benchmark Datasets and Performance Evaluation We evaluate our model on four different datasets: the UCI Human Activity Recognition (HAR) using Smartphones Dataset [29], the OPPORTUNITY Dataset [30], the ExtraSensory dataset [31], and the NTU-RGB-D dataset [32]. Although there are many other human activity recognition benchmark datasets [50], we choose the above datasets to better convey our message of achieving feature usage efficiency and interpretability using our adaptive feature selection framework with the following reasons. 
First, the UCI HAR dataset is a clean dataset with no missing values, allowing us to benchmark different methods without any discrepancies in data preprocessing confounding our evaluations. Second, the OPPORTUNITY dataset contains activity labels that correspond to specific sensors. An optimal adaptive feature selector should primarily choose these sensors under specific contexts with clear physical meaning. Finally, the ExtraSensory dataset studies a multilabel classification problem, where two or more labels can be active at any given time, while the NTU-RGB-D dataset is a complicated activity recognition dataset with over 60 classes of activities using data from 25 skeleton joints. These datasets allow us to benchmark model performance in a complex setting. Due to the page limit, our implementation details and results on the NTU-RGB-D dataset are available in Appendix A and B. We investigate several aspects of our model performance on these benchmarks. To show the effect in prediction accuracy when our selection module is considered, we compare its performance to a standard GRU network [51]. To show the effect of considering dynamic feature selection, we compare a nonadaptive `0 formulation that statically selects features by solving (4) [18]. The performance of our `0 regularized formulation is also benchmarked with an `1 regularized formulation. To benchmark the performance of our differentiable relaxation-based optimization strategy, we implement the Straight-Through estimator [52] and Augment-REINFORCE-Merge (ARM) gradient estimates [49] as alternative methods to optimize our formulation. As stated in the previous section, the fully sequential application of ARM was not addressed in the original paper, and will be prohibitively expensive to compute exactly. Hence, we combine ARM and Straight-Through (ST) estimator [52] as another approach to optimize our formulation. More specifically, we calculate the gradients with respect to the Bernoulli variables with ARM, and use the ST estimator to backpropagate the gradients through the Bernoulli variables to previous layers’ parameters. To further show the importance of considering the sparse regularized formulation, we compare with an attention-based feature selection, selecting features based on the largest attention weights. Because attention yields feature attention weights instead of feature subsets, we select features by using a hard threshold α of the attention weights and scaling the selected features by 1− α for different values of α. Indeed, without this modification, we observe that an attention-based feature selection would select 100% of the features at all times. Finally, we have attempted to implement the dynamic feature selection method by Yang et al. [2] as a distinctly different benchmark. However, without any implementation details provided by the authors, we were not able to reproduce their results. UCI HAR Dataset We first test our proposed method on performing simultaneous prediction and adaptive feature selection on the UCI HAR dataset [29]. This dataset consists of 561 smartphone sensor measurements including various gyroscope and accelerometer readings, with the task of inferring the activity that the user performs at any given time. There are six possible activities that a subject can perform: walking, walking upstairs, walking downstairs, sitting, standing, and laying. 
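As a side note on the thresholded-attention baseline described in the experimental setup above, a possible reading of that baseline is sketched below; this is illustrative only, the function name and the default threshold are placeholders, and the exact baseline implementation may differ.

```python
import torch


def thresholded_attention_select(x, attn_weights, alpha=0.95):
    """Baseline: keep features whose attention weight exceeds a hard
    threshold alpha and scale the survivors by (1 - alpha).

    x, attn_weights: tensors of shape (batch, num_features).
    """
    mask = (attn_weights > alpha).float()
    selected_fraction = mask.mean().item()   # fraction of features kept
    x_selected = x * mask * (1.0 - alpha)
    return x_selected, selected_fraction
```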
We first compare various optimization methods, using stochastic gradients by differential relaxation using Gumbel-Softmax reparametrization, ARM, ST-ARM, Straight-Through gradients, and an `1 regularized formulation to solve adaptive feature selection. The results are provided in Table 1. As shown, Gumbel-Softmax achieves the best prediction accuracy with the least number of features. Utilizing either the Straight Through estimator, ARM, or ST-ARM for gradient estimation cannot provide a better balance between accuracy and efficiency compared with the Gumbel-Softmax relaxation-based optimization. Indeed, the performance of the ST estimator is expected, as there is a mismatch between the forward propagated activations and the backward propagated gradients in the estimator. Meanwhile, we attribute the lower performance of the ARM and ST-ARM optimizer to its use in a sequential fashion, which was not originally considered. The lower performance of the `1 regularized formulation is expected, as `1 regularization is an approximation to the problem of selecting the optimal feature subset. In the following experiments, we have seen similar trends and only report the results from the Gumbel-Softmax based optimization. Benchmarking results of different models are given in Table 2. As shown, our adaptive feature selection model is able to achieve a competitive accuracy using only 0.28% of the features, or on average about 1.57 sensors at any given time. We also observe that both the attention and our adaptive formulation is able to improve upon the accuracy of the standard GRU, suggesting that feature selection can also regularize the model to improve accuracy. Although the attention-based model yields the best accuracy, this comes at a cost of utilizing around 50% of the features at any given time. We study the effect of the regularization weight λ by varying it from λ ∈ {1, 0.1, 0.01, 0.005, 0.001}. We compare this with the attention model by varying the threshold α used to select features from α ∈ {0.5, 0.9, 0.95, 0.99, 0.995, 0.999}, as well as the nonadaptive model by varying its λ from λ ∈ {1000, 100, . . . 0.01, 0.005, 0.001}. A trade-off curve between the number of selected features and the performance for the three models can be seen in Figure 2(b). As shown in the figure, the accuracy of the attention model suffers increasingly with smaller feature subsets, as attention is not a formulation specifically tailored to find sparse solutions. On the other hand, the accuracy of our adaptive formulation is unaffected by the number of features, suggesting that selecting around 0.3% of the features on average may be optimal for the given problem. It further confirms that our adaptive formulation selects the most informative features given the context. The performance of the nonadaptive model is consistent for feature subsets of size 10% or greater. However, it suffers a drop in accuracy for extremely small feature subsets. This shows that for static selection, selecting a feature set that is too large would result in collecting many redundant features for certain contexts, while selecting a feature set that is too small would be insufficient for maintaining accuracy. An example of dynamically selected features can be seen in Figure 2(a). We plot the prediction of our model compared to the true label and illustrate the features that are used for prediction. We also plot a heatmap for the features selected under each activity in Figure 2(c). 
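A per-activity selection heatmap such as the one in Figure 2(c) could, for instance, be assembled from gates logged at test time as in the sketch below; the function name and array shapes are assumptions, not the authors' plotting code.

```python
import numpy as np


def selection_heatmap(gates, labels, num_activities):
    """Average selection rate of each feature under each activity.

    gates:  (T, P) array of binary gates logged at test time.
    labels: (T,) array of integer activity labels for the same steps.
    Returns a (num_activities, P) matrix of per-activity selection rates.
    """
    heatmap = np.zeros((num_activities, gates.shape[1]))
    for a in range(num_activities):
        steps = labels == a
        if steps.any():
            heatmap[a] = gates[steps].mean(axis=0)
    return heatmap
```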
Note that mainly 5 out of the 561 features are used for prediction at any given time. Observing the selected features, we see that for the static activities such as sitting, standing, and laying, only sensor feature 52 and 63, features relating to the gravity accelerometer, are necessary for prediction. On the other hand, the active states such as walking, walking up, and walking down requires 3 sensor features: sensor 65, 508, and 556, which are related to both the gravity accelerometer and the body accelerometer. This is intuitively appealing as, under the static contexts, the body accelerometer measurements would be relatively constant, and unnecessary for prediction. On the other hand, for the active contexts, the body accelerometer measurements are necessary to reason about how the subject is moving and accurately discriminate between the different active states. Meanwhile, we found that measurements relating to the gyroscope were unnecessary for prediction. UCI OPPORTUNITY Dataset We further test our proposed method on the UCI OPPORTUNITY Dataset [30]. This dataset consists of multiple different label types for human activity, ranging from locomotion, hand gestures, to object interactions. The dataset consists of 242 measurements from accelerometers and Inertial Measurement Units (IMUs) attached to the user, as well as accelerometers attached to different objects with which the user can interact. We use the mid-level gesture activities as the target for our models to predict, which contain gestures related to specific objects, such as opening a door and drinking from a cup. A comparison of the accuracy and the percentage of selected features by different models is given in Table 2, while example predictions and a trade-off curve are constructed and shown in Figures 3(a), 3(b), and 3(c), with a similar trend as the results on the UCI HAR dataset. Notably, the trade-off for the nonadaptive models remains constant for λ ∈ {0.0001, 0.001, . . . , 1}, with a sharp decrease in accuracy for λ ≥ 10. A heatmap for the selected features under each activity is shown in Figure 4. Here, the active sensor features across all activities are features 40 and 42, readings of the IMU attached to the subject’s back, feature 82, readings from the IMU attached to the left upper arm (LUA), and features 230 and 239, location tags that estimate the subject’s position. We posit that these general sensor features are selected to track the subject’s overall position and movements, as they are also predominantly selected in cases with no labels. Meanwhile, sensors 5, 6, and 16, readings from the accelerometer attached to the hip, LUA, and back, are specific to activities involving opening/closing doors or drawers. Interestingly, sensors attached to specific objects, such as accelerometers on doors and cups, are unnecessary for prediction. We attribute this to the severe amount of missing values of these sensors. Indeed, the sensors that have the least amount of missing values are the body sensors and the localization tags. We hypothesize that the model prefers these sensors for their consistent discriminative power on multiple activity types compared to the object specific sensors. In addition to these object specific sensors, 5 IMUs, 9 accelerometers, and 2 localization tags can be completely turned off without significantly affecting prediction performance on this task. ExtraSensory Dataset We further test our proposed method on the ExtraSensory Dataset [31]. 
This is a multilabel classification dataset, where two or more labels can be active at any given time. It consists of 51 different context labels, and 225 sensor features. We frame the problem as a multilabel binary classification problem, where we have a binary output for each label indicating whether it is active. A comparison of the accuracy and selected features by different models tested can be seen in Table 2. Our method is again competitive with the standard GRU model using less than 12% of all the features. A trade-off curve is shown in Figure 5(b), where we see a similar trend for both adaptive and attention models. However, we were unable to obtain a feature selection percentage lower than 25% for the nonadaptive model even with λ as large as $10^4$. We believe that this is because at least 25% of statically selected features are needed; otherwise the nonadaptive model will degrade in performance catastrophically, similar to the OPPORTUNITY dataset results. A heatmap and detailed discussion of the features that our model dynamically selected can be found in Appendix C. The results on these three datasets along with the results on the NTU-RGB-D dataset in Appendix B indicate that our adaptive monitoring framework provides the best trade-off between feature efficiency and accuracy, while the features that it dynamically selects are also interpretable and associated with the actual activity types. 5 CONCLUSIONS We propose a novel method for performing adaptive feature selection by sequential context-dependent feature subset selection, which is cast into a stochastic optimization formulation by modifying the ℓ0-regularized minimization formulation. To make this problem tractable, we perform a stochastic relaxation along with a differentiable reparameterization, making the optimization amenable to gradient-based optimization with auto-differentiation. We apply this method to human activity recognition by integrating our method into Recurrent Neural Network-based architectures. We benchmark our model on four different activity recognition datasets and have compared it with various adaptive and static feature selection benchmarks. Our results show that our model maintains a desirable prediction performance using a fraction of the sensors or features. The features that our model selected were shown to be interpretable and associated with the activity types. We split the data into 70% for training, 10% for validation, and 20% for testing. The base model we utilize is a one-layer GRU with 2240 neurons for its hidden state. We use a temperature of 0.05 for the Gumbel-Softmax relaxation. We use the binary cross-entropy of the predicted vs. actual labels as the performance measure, where the model outputs a binary decision for each label, representing whether each label is active or not. We do not include the performance loss for the missing labels and scale the total performance loss of the observed labels for each batch by (#timepoints × #total labels) / (#observed labels in labelled timepoints); an illustrative sketch of this scaled loss appears below. We optimize this scaled loss with a batch size of 100 using the RMSProp optimizer, setting the learning rate to $10^{-4}$ and the smoothing constant to 0.99 for 10000 epochs. We then save both the latest model and the best model validated on the validation set. A.4 NTU-RGB-D DATASET We first preprocess the NTU-RGB-D dataset to remove all the samples with missing skeleton data. We then segment the time-series skeleton data across subjects into 66.5% training, 3.5% validation, and 30% testing sets.
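Returning briefly to the ExtraSensory training details above, a hedged sketch of the masked and scaled multilabel loss might look as follows; scaled_multilabel_bce and its tensor shapes are illustrative assumptions rather than the exact training code.

```python
import torch
import torch.nn.functional as F


def scaled_multilabel_bce(logits, targets, observed_mask):
    """Multilabel BCE that skips missing labels and rescales the remainder.

    logits, targets (float 0/1), observed_mask: shape (timepoints, num_labels),
    with observed_mask equal to 1 where a label is present.  The summed loss
    over observed labels is scaled by
    (#timepoints * #total labels) / (#observed labels), as described above.
    """
    per_entry = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    per_entry = per_entry * observed_mask                  # drop missing labels
    n_time, n_labels = targets.shape
    scale = (n_time * n_labels) / observed_mask.sum().clamp_min(1.0)
    return per_entry.sum() * scale
```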
The baseline model that we have implemented for the NTU-RGB-D dataset is the Independent RNN [41]. This model consists of stacked RNN modules with several additional dropout, batch normalization, and fully connected layers in between. Our architecture closely follows the densely connected independent RNN of [41]. To incorporate feature selection using either our adaptive formulation or an attention-based formulation, we add an additional RNN to the beginning of this model. This RNN takes as input the 25 different joint features and is tasked to select the joints to use for prediction further along the architecture pipeline. Since the joints are in the form of 3D coordinates, our feature selection method is modified such that it selects either all 3 of the X, Y, and Z coordinates of a particular joint, or none at all. Our architecture can be seen in Figure 6. Similar as the baseline method presented in [41], we have trained this architecture using a batch size of 128 and a sequence length of 20 using the Adam optimizer with a patience threshold of 100 iterations. We then save both the latest model and the best model validated on the validation set. B RESULTS AND DISCUSSION OF THE NTU-RGB-D DATASET We have tested our proposed method on the NTU-RGB-D dataset [32]. This dataset consists of 60 different activities performed by either a single individual or two individuals. The measurements of this dataset are in the form of skeleton data consisting of 25 different 3D coordinates of the corresponding joints of the participating individuals. We compare our method with three different baselines shown in Table 3: the standard independent RNN, a soft attention baseline, and a thresholded attention baseline. We see that our method maintains a competitive accuracy compared to the baseline using less than 50% of the features. On the other hand, because the thresholded attention formulation is not specifically optimized for feature sparsity, we see that it performs significantly worse compared to the other methods. Meanwhile, the softattention slightly improves upon the accuracy of the base architecture. However, as also indicated by our other experiments, soft-attention is not a dynamic feature selection method, and tends to select 100% of the features at all times. A heatmap for the features selected under each activity is shown in Figure 7. Here, we can see that there are two distinct feature sets used for two different types of interactions: single person interactions and two person interactions. Indeed, since the two person activities require sensor measurements from two individuals, the dynamic feature selection would need to prioritize different features to observe their activities as opposed to single person activities. Table 3: Comparison of various methods for activity recognition on the NTU-RGB-D dataset. *Accuracies and average number of features selected are in (%). 
Method                  Accuracy   Features Selected
Adaptive                80.54      49.65
Thresholded attention   40.07      52.31
Soft attention          83.28      100
No selection            83.02      100

[Figure 7: Heatmap of sensor feature activations under each activity state of the NTU-RGB-D dataset. Columns correspond to the 25 skeleton joints (base of the spine through the right thumb), rows to the 60 activity classes (drink through walking apart from each other); activation values range from roughly 0.4 to 0.8.]

C RESULTS AND DISCUSSION OF THE EXTRASENSORY DATASET A heatmap of the features selected under each activity state can be seen in Figure 8. As shown, there are four groups of sensor features that are used across activities: the phone magnetometer (57-71), watch accelerometer magnitude (85-88), watch accelerometer direction (101-105), and location (138-147). For two particular states, 'on a bus' and 'drinking alcohol', phone accelerometer measurements (5-52) become necessary for prediction. Some states such as 'at home', 'at main workplace', and 'phone in pocket' are notably sparse in sensor feature usage.
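As a final illustration, the joint-level selection used for the NTU-RGB-D experiments (Appendix A.4), where the X, Y, and Z coordinates of a joint are selected or dropped together, could be realized by broadcasting one gate per joint. The sketch below is a hypothetical helper, and it assumes the three coordinates of each joint are stored contiguously in the feature vector.

```python
import torch


def expand_joint_gates(joint_gates):
    """Broadcast per-joint gates to per-coordinate gates for skeleton input.

    joint_gates: (batch, 25) relaxed or binary gates, one per skeleton joint.
    Returns (batch, 75) gates so that the X, Y, and Z coordinates of a joint
    are either all selected or all dropped.
    """
    return joint_gates.repeat_interleave(3, dim=-1)


# usage: x has shape (batch, 75) = 25 joints * 3 coordinates, laid out
# as [j1x, j1y, j1z, j2x, ...]; the masked input is x * expand_joint_gates(g)
```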
1. What is the focus of the reviewed paper, and what are the proposed approaches? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its presentation and comparisons with prior works? 3. Are there any concerns or questions regarding the method's application in testing phases, training, and inference speed? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review The authors tackle the important problem of feature selection. They propose to use differentiable gates with an RNN architecture to select different subsets of features for each time point. I think the idea and method are interesting, and the method could be useful. However, I have crucial problems with the way the paper is presented. Most importantly, the authors describe the l_0 relaxation of Bernoulli random variables as if it is their own contribution. They describe existing known results under a section titles “Methodology” as if they are the first to present Bernoulli random variables to feature selection or that they are the first to relax them using the Gumbel Softmax trick. They also use the word: “we derive” (p.3). This is wrong! And misleading! The same relaxation appears in [1] and used for model sparsification, the descriptions are almost identical to what appears in [1] with almost zero credit to the authors in [1] (a citation appears in related work in a different context). Bernoulli relaxation was already used for feature selection, in [2], and [3], these papers were not even mentioned. The reader can think the authors are the first to introduce such relaxation into the problem of feature selection, while this is again, clearly wrong. The authors are well aware of that this relaxation was presented in [1], and in the experiment section they describe the baseline which solves (4) by citing [1] (citation [18] in their paper), this is again in contradiction to the way they describe the relaxation as if it is their own contribution. Putting these CRITICAL comments aside, I think the results are misleading. Specifically, comparing the average number of selected features to the (constant) number of selected features of the non-adaptive method is misleading. You need to compare the union of selected features by your method to the constant number, otherwise, there is no way to infer if this feature selection method can result in any compression of the model or could lead to training or inference speed up. Given that this is what you measure since you still need all the features to use your model, what are the advantages of the method? Only interpretability? The authors do not explain how the method is used in the testing phase, is the randomness removed? How exactly? The authors do not explain how training/ testing is performed, this appears in the appendix but should be moved to the main texts. The authors should compare the method to the distribution suggested in [1], which seems more suitable for feature selection than the Concrete distribution (used by the authors). Citations are not in the correct ICLR format. Some pros: I like the examples used in the paper as well as the comparison to ARM, ST, ST-ARM. To conclude, I am voting to reject the paper, based on all the reasons mentioned above. [1] Louizos, Christos, Max Welling, and Diederik P. Kingma. "Learning Sparse Neural Networks through L 0 Regularization." ICLR, 2018. [2] Yamada, Y., Lindenbaum, O., Negahban, S., & Kluger, Y. Feature selection using stochastic gates. ICML, 2020. [3] Balın, Muhammed Fatih, Abubakar Abid, and James Zou. "Concrete autoencoders: Differentiable feature selection and reconstruction." ICML. 2019.
ICLR
Title Dynamic Feature Selection for Efficient and Interpretable Human Activity Recognition Abstract In many machine learning tasks, input features with varying degrees of predictive capability are usually acquired at some cost. For example, in human activity recognition (HAR) and mobile health (mHealth) applications, monitoring performance should be achieved with a low cost to gather different sensory features, as maintaining sensors incur monetary, computation, and energy cost. We propose an adaptive feature selection method that dynamically selects features for prediction at any given time point. We formulate this problem as an `0 minimization problem across time, and cast the combinatorial optimization problem into a stochastic optimization formulation. We then utilize a differentiable relaxation to make the problem amenable to gradient-based optimization. Our evaluations on four activity recognition datasets show that our method achieves a favorable trade-off between performance and the number of features used. Moreover, the dynamically selected features of our approach are shown to be interpretable and associated with the actual activity types. 1 INTRODUCTION Acquiring predictive features is critical for building trustworthy machine learning systems, but this often comes at a daunting cost. Such a cost can be in the form of energy needed to maintain an ambient sensor (Ardywibowo et al., 2019; Yang et al., 2020), time needed to complete an experiment (Kiefer, 1959), or manpower required to monitor a hospital patient (Pierskalla & Brailer, 1994). Therefore, it becomes important not only to maintain good performance in the specified task, but also a low cost to gather these features. Indeed, existing Human Activity Recognition (HAR) methods typically use a fixed set of sensors, potentially collecting redundant features to discriminate contexts (Shen & Varshney, 2013; Aziz et al., 2016; Ertuǧrul & Kaya, 2017; Cheng et al., 2018). Classic feature selection methods such as the LASSO and its variants can address the performance-cost trade-off by optimizing an objective penalized by a term that helps promote feature sparsity (Tibshirani, 1996; Friedman et al., 2010, 2008; Zou & Hastie, 2005). Such feature selection formulations are often static, that is, a fixed set of features are selected a priori. However, different features may offer different predictive power under different contexts. For example, a health worker may not need to monitor a recovering patient as frequently compared to a patient with the declining condition; an experiment performed twice may be redundant; or a smartphone sensor may be predictive when the user is walking but not when the user is in a car. By adaptively selecting which sensor(s) to observe at any given time point, one can further reduce the inherent cost for prediction and achieve a better trade-off between cost and prediction accuracy. In addition to cost-efficiency, an adaptive feature selection formulation can also lead to more interpretable and trustworthy predictions. Specifically, the predictions made by the model are only based on the selected features, providing a clear relationship between input features and model predictions. 
Existing efforts on interpreting models are usually based on some post-analyses of the predictions, including the approaches in (1) visualizing higher level representations or reconstructions of inputs based on them (Li et al., 2016; Mahendran & Vedaldi, 2015), (2) evaluating the sensitivity of predictions to local perturbations of inputs or the input gradients (Selvaraju et al., 2017; Ribeiro et al., 2016), and (3) extracting parts of inputs as justifications for predictions (Lei et al., 2016). Another related but orthogonal direction is model compression of training sparse neural networks with the goal of memory and computational efficiency (Louizos et al., 2017; Tartaglione et al., 2018; Han et al., 2015). All these works require collecting all features first and provide post-hoc feature relevance justifications or network pruning. Recent efforts on dynamic feature selection adaptively assign features based on immediate statistics (Gordon et al., 2012; Bloom et al., 2013; Ardywibowo et al., 2019; Zappi et al., 2008), ignoring the information a feature may have on future predictions. Others treat feature selection as a Markov Decision Process (MDP) and use Reinforcement Learning (RL) to solve it (He & Eisner, 2012; Karayev et al., 2013; Kolamunna et al., 2016; Spaan & Lima, 2009; Satsangi et al., 2015; Yang et al., 2020). However, solving the RL objective is not straightforward. Besides being sensitive to hyperparameter settings in general, approximations such as state space discretization and greedy approximations of the combinatorial objective were used to make the RL problem tractable. To this end, we propose a dynamic feature selection method that can be easily integrated into existing deep architectures and trained from end to end, enabling task-driven dynamic feature selection. To achieve this, we define a feature selection module that dynamically selects which features to use at any given time point. We then formulate a sequential combinatorial optimization that minimizes the trade-off between the learning task performance and the number of features selected at each time point. To make this problem tractable, we cast this combinatorial optimization problem into a stochastic optimization formulation. We then adopt a differentiable relaxation of the discrete feature selection variables to make it amenable to stochastic gradient descent based optimization. It therefore can be plugged-in and jointly optimized with state-of-the-art neural networks, achieving task-driven feature selection over time. To show our method’s ability to adaptively select features while maintaining good performance, we evaluate it on four time-series activity recognition datasets: the UCI Human Activity Recognition (HAR) dataset (Anguita et al., 2013), the OPPORTUNITY dataset (Roggen et al., 2010), the ExtraSensory dataset (Vaizman et al., 2017), as well as the NTU-RGB-D dataset (Shahroudy et al., 2016). Several ablation studies and comparisons with other dynamic and static feature selection methods demonstrate the efficacy of our proposed method. Specifically, our dynamic feature selection is able to use as low as 0.28% of the sensor features while still maintaining good human activity monitoring accuracy. Moreover, our dynamically selected features are shown to be interpretable with direct correspondence with different contexts and activity types. 
2 METHODOLOGY 2.1 THE ℓ0-NORM MINIMIZATION PROBLEM Many regularization methods have been developed to solve simultaneous feature selection and model parameter estimation (Tibshirani, 1996; Zou & Hastie, 2005; Tibshirani, 1997; Sun et al., 2014; Simon et al., 2011). The ideal penalty for the purpose of feature selection is the ℓ0-norm of the model coefficients for all predictors. This norm is equivalent to the number of nonzero terms in all the model coefficients. Given a dataset D containing N independent and identically distributed (iid) input-output pairs {(x1, y1), . . . , (xN, yN)} with each xi containing P features, a hypothesis class of predictor functions f(·; θ), and a loss function L(ŷ, y) between prediction ŷ and true output y, the ℓ0-norm regularized optimization problem can be written as follows:

$$\min_{\theta}\; \frac{1}{N} \sum_{i=1}^{N} L\big(f(x_i;\theta),\, y_i\big) + \lambda \|\theta\|_0, \qquad (1)$$

where $\|\theta\|_0 = \sum_{j=1}^{P} \mathbb{I}[\theta_j \neq 0]$ penalizes the number of nonzero model coefficients. In the models that linearly transform the input features xi, penalizing the weights relating to each feature in xi enables sparse feature subset selection. However, such a selection is static, as it does not adaptively select features that are appropriate for a given context. Moreover, the optimization above is computationally prohibitive as it involves combinatorial optimization to select the subset of nonzero model coefficients corresponding to the input features. In the following, we formulate our adaptive dynamic feature selection problem when learning with multivariate time series. Coupled with training recurrent neural networks, this adaptive feature selection problem is transformed into a sequential context-dependent feature subset selection problem, to which we devise a stochastic relaxation to make the problem tractable. 2.2 DYNAMIC FEATURE SELECTION VIA SEQUENTIAL CONTEXT-DEPENDENT FEATURE SUBSET SELECTION Instead of finding a subset of nonzero model coefficients, an equivalent formulation can be derived by directly selecting the feature subset. Without loss of generality, let z be a binary vector that indicates whether each feature is selected or not. Then, the original ℓ0-norm optimization formulation can be equivalently written as follows:

$$\min_{\theta, z}\; \frac{1}{N} \sum_{i=1}^{N} L\big(f(x_i \circ z;\theta),\, y_i\big) + \lambda \|z\|_0. \qquad (2)$$

Compared to the original problem, the penalty on the number of selected features is through the ℓ0-norm of z. This formulation is more flexible, as z can be made dependent on corresponding input features, output labels, or any contextual information, allowing us to formulate our dynamic feature selection problem when learning with multivariate time series data. Specifically, let the input-output pairs (xi, yi) be a pair of time series data of length Ti. At each time t, our model predicts the output $y_i^t$, as well as the next feature set to select $z_i^t$. This optimization problem can be formulated as:

$$\min_{\theta, z}\; \frac{1}{N} \sum_{i=1}^{N} \sum_{t=1}^{T_i} L\big(f(x_i^{0:t-1} \circ z_i^{0:t-1};\theta),\, y_i^t\big) + \lambda \sum_{i=1}^{N} \sum_{t=1}^{T_i} \|z_i^t\|_0. \qquad (3)$$

Here, we are tasked to find a set of parameters θ and feature sets $z_i^t$ for each sample i at each time point t to optimize the trade-off between model performance and the number of selected features. The model then uses the parameters and the previously observed features $\mathcal{X}_i^t \triangleq x_i^{0:t-1} \circ z_i^{0:t-1}$ to infer the next output $y_i^t$. However, the above formulation remains intractable, as it involves combinatorial optimization to select the feature subsets at each time point, in addition to the joint optimization of the model parameters and variable selection.
Naively, one may also need to solve a separate optimization problem to find $z_i^t$ for each time point during the run time. In the following section, we derive a relaxation based on stochastic optimization parameterizing the $z_i^t$'s to make the above problem tractable. 2.3 RELAXATION THROUGH STOCHASTIC OPTIMIZATION Instead of finding the exact feature subsets indexed by $z_i^t$ that achieve the optimal regularized objective, one can treat these $z_i^t$'s as binary random variables and seek to optimize the distribution π(z|φ) that generates these random variables. For the ease of exposition, we first focus on the relaxation of the non-adaptive formulation in (1) as follows:

$$\min_{\theta, \phi}\; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}}\Big[ \mathbb{E}_{z \sim \pi(z|\phi)}\big[ L(f(x_i \circ z;\theta),\, y_i) + \lambda \|z\|_0 \big] \Big]. \qquad (4)$$

Note that the solution to this problem is equivalent to the original one, as the original combinatorial problem can be recovered by setting π(z|φ) = Bern(φ), a Bernoulli distribution parameterized by φ, and restricting φ ∈ {0, 1}. Using this relaxation, the regularization term can now be evaluated analytically:

$$\mathbb{E}_{z \sim \pi(z|\phi)}\big[ \|z\|_0 \big] = \mathbb{E}_{z \sim \mathrm{Bern}(\phi)}\big[ \|z\|_0 \big] = \sum_{j=1}^{P} \pi(z|\phi)_j = \sum_{j=1}^{P} \phi_j. \qquad (5)$$

On the other hand, the outer expectation in (4) can be approximated using minibatches. Relaxation of binary random variables has been adopted in Louizos et al. (2017) for network architecture sparsification, and in Yamada et al. (2019); Balın et al. (2019) for static feature selection. Here, we extend the above relaxation for time series data, where unlike previous works, the binary random variables are parameterized locally and are context-dependent, and features are selected adaptively across time. We first note that our adaptive feature selection formulation in (3) allows each time point to have its own feature selection distribution $\pi_i^t(z|\phi) \triangleq \pi(z|\mathcal{X}_i^{t-1}, \phi)$ conditioned on previously selected observed features $\mathcal{X}_i^{t-1}$ as defined above. Let $\pi_i(z|\phi)$ be the set of $\pi_i^t(z|\phi)$ for all t ∈ {1, . . . , Ti}. The stochastic relaxation of the adaptive feature selection formulation can be written as follows:

$$\min_{\theta, \phi}\; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}}\Big[ \mathbb{E}_{z_i \sim \pi_i(z|\phi)}\Big[ \sum_{t=1}^{T_i} L\big(f(\mathcal{X}_i^{t-1};\theta),\, y_i^t\big) \Big] + \lambda \sum_{t=1}^{T_i} \sum_{j=1}^{P} \pi_i^t(z|\phi)_j \Big]. \qquad (6)$$

2.4 MODEL PARAMETERIZATION AND DIFFERENTIABLE RELAXATION The difficulty in solving the above problem using gradient descent is that the discrete random variables $z_i^t$'s are not directly amenable to stochastic reparameterization techniques. An effective and simple to implement formulation that we adopt is the Gumbel-Softmax reparameterization (Jang et al., 2016; Maddison et al., 2016), which relaxes a discrete valued random variable z parameterized by φ to a continuous random variable z̃. Firstly, we can parameterize $\pi(z|\mathcal{X}_i^{t-1}, \phi)$ using a vector-valued function $\sigma(\mathcal{X}_i^{t-1}, \phi)$ of the previous observations $\mathcal{X}_i^{t-1}$, with φ now being the parameters of σ(·). The distribution can now be rewritten as $\pi(z|\mathcal{X}_i^{t-1}, \phi) = \mathrm{Bern}\big(\sigma(\mathcal{X}_i^{t-1}, \phi)\big)$. With this, the discrete valued random variables $z_i^t$ can be relaxed into continuous random variables $\tilde{z}_i^t$ as follows:

$$\tilde{z}_i^t = \frac{1}{1 + \exp\!\big(-(\log \sigma(\mathcal{X}_i^{t-1}, \phi) + L)/\tau\big)}. \qquad (7)$$

Here, $L = \log u - \log(1 - u)$ is a sample from the logistic distribution, where u ∼ Unif(0, 1), and τ is a temperature parameter. For low values of τ, $\tilde{z}_i^t$ approaches a sample of a binary random variable, recovering the original discrete problem, while for high values, $\tilde{z}_i^t$ will equal 1/2. With this, we are able to compute gradient estimates of $\tilde{z}_i^t$ and approximate the gradient of $z_i^t$ as $\nabla_{\theta,\phi} z_i^t \approx \nabla_{\theta,\phi} \tilde{z}_i^t$.
This enables us to backpropagate through the discrete random variables and train the selection parameters along with the model parameters jointly using stochastic gradient descent. Meanwhile, at test time, we sample binary random variables from the learned probabilities. 2.5 MODEL SPECIFICATION To complete our formulation, we specify the model architecture that we use. We have implemented our adaptive dynamic feature selection with a Gated Recurrent Unit (GRU) (Cho et al., 2014a), a type of Recurrent Neural Network (RNN) (Graves et al., 2013), as shown in Fig- ure 1. Here, we have the previous observations X t−1i being summarized by the hidden state h t−1 i . For adaptive feature selection, the selection distribution is made dependent on ht−1i using a sigmoid of its linear transformation by a weight matrix W as follows: σ(X t−1i ,φ) = SIGMOID(Wh t−1 i ), such that φ = {W}. We note that such a module can be easily integrated into many existing deep architectures and trained from end to end, allowing for task-driven feature selection. For example, the module can be applied to Recurrent Convolutional Neural Networks (RCNN) (Liang & Hu, 2015) to selectively determine which convolutional patches/channels to use, or to general feedforward networks to selectively deactivate certain neurons/channels to reduce computation. We have demonstrated this ability by applying it to an Independent RNN (Li et al., 2018) benchmarked on the NTU-RGB-D dataset (Shahroudy et al., 2016), as detailed in Appendix A.4. With the model specified, our method can be applied to existing human activity recognition datasets. Specifically, we are now able to train a prediction model and dynamic feature selection policy offline, and test it on a withheld testing set. The application of our model to online learning is subject to future work. 3 RELATED WORK Existing HAR systems typically use a fixed set of sensors, potentially collecting redundant features for easily discriminated contexts. Methods that attempt to find a fixed or static feature set often rank feature sets using metrics such as Information Gain (Shen & Varshney, 2013), or relevancy ranking through a filtering strategy (Aziz et al., 2016; Ertuǧrul & Kaya, 2017; Cheng et al., 2018). However, static feature selection can potentially result in collecting redundant information for highly discriminable contexts. Work on dynamic feature selection can be divided into Reinforcement Learning (RL) based and nonRL approaches. Non-RL based approaches vary from assigning certain features to certain activities (Gordon et al., 2012), pre-defining feature subsets for prediction (Bloom et al., 2013; Strubell et al., 2015), optimizing the trade-off between prediction entropy and the number of selected features (Ardywibowo et al., 2019), to building a metaclassifier for sensor selection (Zappi et al., 2008). These methods all use immediate rewards to perform feature selection. For predicting long activity sequences, this potentially ignores the information that a feature may have on future predictions, or conversely, overestimate the importance of a feature given previous observations. Among the RL based approaches, some methods attempt to build an MDP to decide which feature to select next or whether to stop acquiring features and make a prediction (He & Eisner, 2012; Karayev et al., 2013; Kolamunna et al., 2016). These methods condition the choice of one feature on the observation generated by another one, instead of choosing between all sensors simultaneously. 
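For readers who prefer code to Figure 1, the following is one possible end-to-end sketch of the model specification in Section 2.5, combining the relaxed gate of Eq. (7) with a GRU cell; the class and argument names are illustrative, the default temperature only mirrors the appendix value, and the explicit loop is written for clarity rather than speed.

```python
import torch
import torch.nn as nn


class AdaptiveGRUClassifier(nn.Module):
    """Unrolled GRU with a per-step feature-selection gate (cf. Figure 1)."""

    def __init__(self, num_features, hidden_dim, num_classes, tau=0.05):
        super().__init__()
        self.cell = nn.GRUCell(num_features, hidden_dim)
        self.gate = nn.Linear(hidden_dim, num_features)    # sigma(W h_{t-1})
        self.head = nn.Linear(hidden_dim, num_classes)
        self.tau = tau

    def forward(self, x):                                   # x: (B, T, P)
        B, T, _ = x.shape
        h = x.new_zeros(B, self.cell.hidden_size)
        logits, penalties = [], []
        for t in range(T):
            probs = torch.sigmoid(self.gate(h))             # selection probabilities
            u = torch.rand_like(probs).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)         # logistic noise
            z = torch.sigmoid((torch.log(probs.clamp_min(1e-6)) + noise) / self.tau)
            h = self.cell(x[:, t] * z, h)                   # observe only gated features
            logits.append(self.head(h))
            penalties.append(probs.sum(dim=-1))             # expected L0 per step
        return torch.stack(logits, dim=1), torch.stack(penalties, dim=1)
```

The per-step penalties returned here correspond to the analytic ℓ0 term in Eq. (6) and would be weighted by λ in the training objective.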
Spaan & Lima (2009) and Satsangi et al. (2015) formulated a Partially Observable MDP (POMDP) using a discretization of the continuous state to model the policy. Yang et al. (2020) formulate an RL objective by penalizing the prediction performance by the number of sensors used. Although using a desirable objective, the method employs a greedy maximization process to approximately solve the combinatorial optimization. Moreover, they do not integrate easily with existing deep architectures. Attention is another method worth noting, as it is able to select the most relevant segments of a sequence for the current prediction (Vaswani et al., 2017). Attention modules have been recently used for activity recognition (Ma et al., 2019). However, like most attention methods, it requires all of the features to be observed before deciding which features are the most important for prediction. Moreover, the number of instances attended to is not penalized. Finally, soft attention methods typically weight the inputs, instead of selecting the feature subset. Indeed, our experiments on naively applying attention for dynamic feature selection show that it always selects 100% of the features at all times. Sparse regularization has previously been formulated for deep models, e.g., Liu et al. (2015); Louizos et al. (2017); Frankle & Carbin (2018), but their focus has primarily been in statically compressing model sizes or reducing overfitting, instead of dynamically selecting features for prediction. In particular, `1 regularization is a common method to promote feature sparsity (Tibshirani, 1996; Friedman et al., 2010, 2008; Zou & Hastie, 2005). Selection or skipping along the temporal direction to decide when to memorize vs update model state has been considered in Hu et al. (2019); Campos et al. (2018); Neil et al. (2016). These works either are not context dependent or do not consider energy efficiency or ineterpretability. Additionally, skipping time steps may not be suitable for continuous monitoring tasks including HAR, where we are tasked to give a prediction at every time step. Nevertheless, our dynamic/adaptive feature selection is orthogonal to temporal selection/skipping and we leave exploring the potential integration of these two directions as our future research. Finally, there have been many formulations that propose to solve the issue of backpropagation through discrete random variables (Jang et al., 2016; Maddison et al., 2016; Tucker et al., 2017; Grathwohl et al., 2017; Yin & Zhou, 2018). REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2017) employ REINFORCE and introduce relaxation-based baselines to reduce sample variance of the estimator. However, these baseline functions increase the computation and cause potential conflict between minimizing the sample variance of the gradient estimate and maximizing the expectation objective. Augment-REINFORCE-Merge is a self-control gradient estimator that does not need additional baselines (Yin & Zhou, 2018). It provides unbiased gradient estimates that exhibit low variance, but its direct application to autoregressive or sequential setups is not addressed by Yin & Zhou (2018) and leads to approximate gradients. Moreover, an exact sequential formulation will require prohibitive computation, squared in sequence length forward passes. 
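For reference, the Straight-Through estimator that later serves as a baseline optimizer in Section 4 is commonly implemented with the small trick sketched below; this is a generic illustration, not the exact baseline code used in the experiments.

```python
import torch


def straight_through_bernoulli(probs):
    """Straight-Through Bernoulli gate.

    Forward pass uses hard binary samples; the backward pass treats the gate
    as if it were the continuous probabilities, so gradients flow to `probs`.
    """
    hard = torch.bernoulli(probs)
    return hard.detach() + probs - probs.detach()
```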
4 EXPERIMENTS Benchmark Datasets and Performance Evaluation We evaluate our model on four different datasets: the UCI Human Activity Recognition (HAR) using Smartphones Dataset (Anguita et al., 2013), the OPPORTUNITY Dataset (Roggen et al., 2010), the ExtraSensory dataset (Vaizman et al., 2017), and the NTU-RGB-D dataset (Shahroudy et al., 2016). Although there are many other human activity recognition benchmark datasets (Chen et al., 2020), we choose the above datasets to better convey our message of achieving feature usage efficiency and interpretability using our adaptive feature selection framework with the following reasons. First, the UCI HAR dataset is a clean dataset with no missing values, allowing us to benchmark different methods without any discrepancies in data preprocessing confounding our evaluations. Second, the OPPORTUNITY dataset contains activity labels that correspond to specific sensors. An optimal adaptive feature selector should primarily choose these sensors under specific contexts with clear physical meaning. Finally, the ExtraSensory dataset studies a multilabel classification problem, where two or more labels can be active at any given time, while the NTU-RGB-D dataset is a complicated activity recognition dataset with over 60 classes of activities using data from 25 skeleton joints. These datasets allow us to benchmark model performance in a complex setting. For all datasets, we randomly split data both chronologically and by different subjects. More details for each dataset and its corresponding experiment setup is provided under its own subheading in the following and also in Appendix A. Due to the page limit, our implementation details and results on the NTU-RGB-D dataset are available in Appendix A and B. We investigate several aspects of our model performance on these benchmarks. To show the effect in prediction accuracy when our selection module is considered, we compare its performance to a standard GRU network (Cho et al., 2014b). To show the effect of considering dynamic feature selection, we compare a nonadaptive `0 formulation that statically selects features by solving (4) (Louizos et al., 2017). The performance of our `0 regularized formulation is also benchmarked with an `1 regularized formulation. To benchmark the performance of our differentiable relaxationbased optimization strategy, we implement the Straight-Through estimator (Hinton et al., 2012) and Augment-REINFORCE-Merge (ARM) gradient estimates (Yin & Zhou, 2018) as alternative methods to optimize our formulation. As stated in the previous section, the fully sequential application of ARM was not addressed in the original paper, and will be prohibitively expensive to compute exactly. Hence, we combine ARM and Straight-Through (ST) estimator (Hinton et al., 2012) as another approach to optimize our formulation. More specifically, we calculate the gradients with respect to the Bernoulli variables with ARM, and use the ST estimator to backpropagate the gradients through the Bernoulli variables to previous layers’ parameters. We also have tested different values for the temperature hyperparameter τ in Appendix D, where we observe that the settings with the temperature parameters below 1 generally yield the best results with no noticeable performance difference. To further show the importance of considering the sparse regularized formulation, we compare with an attention-based feature selection, selecting features based on the largest attention weights. 
Because attention yields feature attention weights instead of feature subsets, we select features by using a hard threshold α of the attention weights and scaling the selected features by 1− α for different values of α. Indeed, without this modification, we observe that an attention-based feature selection would select 100% of the features at all times. Finally, we have attempted to implement the dynamic feature selection method by Yang et al. (2020) as a distinctly different benchmark. However, without any implementation details provided by the authors, we were not able to reproduce their results. UCI HAR Dataset We first test our proposed method on performing simultaneous prediction and adaptive feature selection on the UCI HAR dataset (Anguita et al., 2013). This dataset consists of 561 smartphone sensor measurements including various gyroscope and accelerometer readings, with the task of inferring the activity that the user performs at any given time. There are six possible activities that a subject can perform: walking, walking upstairs, walking downstairs, sitting, standing, and laying. We first compare various optimization methods, using stochastic gradients by differential relaxation using Gumbel-Softmax reparametrization, ARM, ST-ARM, Straight-Through gradients, and an `1 regularized formulation to solve adaptive feature selection. The results are provided in Table 1. As shown, Gumbel-Softmax achieves the best prediction accuracy with the least number of features. Utilizing either the Straight Through estimator, ARM, or ST-ARM for gradient estimation cannot provide a better balance between accuracy and efficiency compared with the Gumbel-Softmax relaxation-based optimization. Indeed, the performance of the ST estimator is expected, as there is a mismatch between the forward propagated activations and the backward propagated gradients in the estimator. Meanwhile, we attribute the lower performance of the ARM and ST-ARM optimizer to its use in a sequential fashion, which was not originally considered. The lower performance of the `1 regularized formulation is expected, as `1 regularization is an approximation to the problem of selecting the optimal feature subset. In the following experiments, we have seen similar trends and only report the results from the Gumbel-Softmax based optimization. Benchmarking results of different models are given in Table 2. As shown, our adaptive feature selection model is able to achieve a competitive accuracy using only 0.28% of the features, or on average about 1.57 sensors at any given time. We also observe that both the attention and our adaptive formulation is able to improve upon the accuracy of the standard GRU, suggesting that feature selection can also regularize the model to improve accuracy. Although the attention-based model yields the best accuracy, this comes at a cost of utilizing around 50% of the features at any given time. We also have checked the average accuracy of our model on a time-aligned testing set to show that our model is stable for long-term predictions in Appendix E. We study the effect of the regularization weight λ by varying it from λ ∈ {1, 0.1, 0.01, 0.005, 0.001}. We compare this with the attention model by varying the threshold α used to select features from α ∈ {0.5, 0.9, 0.95, 0.99, 0.995, 0.999}, as well as the nonadaptive model by varying its λ from λ ∈ {1000, 100, . . . 0.01, 0.005, 0.001}. 
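For reference, the thresholded-attention baseline described above can be sketched as follows. The hard threshold α and the scaling of selected features by 1 − α follow the description in the text; the function name and tensor shapes are our own illustrative choices.

```python
import torch

def attention_select(x, attn_weights, alpha):
    """Keep features whose attention weight exceeds alpha and scale the
    surviving features by (1 - alpha); everything else is zeroed out."""
    mask = (attn_weights > alpha).float()
    return x * mask * (1.0 - alpha), mask.mean()   # masked input, fraction of features kept
```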
A trade-off curve between the number of selected features and the performance for the three models can be seen in Figure 2(b). As shown in the figure, the accuracy of the attention model suffers increasingly with smaller feature subsets, as attention is not a formulation specifically tailored to find sparse solutions. On the other hand, the accuracy of our adaptive formulation is unaffected by the number of features, suggesting that selecting around 0.3% of the features on average may be optimal for the given problem. It further confirms that our adaptive formulation selects the most informative features given the context. The performance of the nonadaptive model is consistent for feature subsets of size 10% or greater. However, it suffers a drop in accuracy for extremely small feature subsets. This shows that for static selection, selecting a feature set that is too large would result in collecting many redundant features for certain contexts, while selecting a feature set that is too small would be insufficient for maintaining accuracy.

Table 2: Comparison of various models for adaptive monitoring on three activity recognition datasets. Accuracy metrics and average numbers of selected features are all in (%).

Method                                     | UCI HAR (Acc. / Feat.) | OPPORTUNITY (Acc. / Feat.) | ExtraSensory (Acc. / F1 / Feat.)
Adaptive (Ours), λ = 1                     | 97.18 / 0.28           | 84.26 / 15.88              | 91.14 / 55.06 / 11.25
Attention, α = 0.5                         | 98.38 / 49.94          | 83.42 / 54.20              | 90.37 / 53.29 / 54.73
Nonadaptive, λ = 1 (Louizos et al., 2017)  | 95.49 / 14.35          | 81.63 / 49.57              | 91.13 / 53.18 / 42.32
No selection (GRU) (Cho et al., 2014b)     | 96.67 / 100            | 84.16 / 100                | 91.14 / 53.53 / 100

An example of dynamically selected features can be seen in Figure 2(a). We plot the prediction of our model compared to the true label and illustrate the features that are used for prediction. We also plot a heatmap for the features selected under each activity in Figure 2(c). Although these features alone may not be exclusively attributed as the only features necessary for prediction under specific activities, such a visualization is useful to retrospectively observe the features selected by our model at each time point. Note that mainly 5 out of the 561 features are used for prediction at any given time. Observing the selected features, we see that for static activities such as sitting, standing, and laying, only sensor features 52 and 63, which relate to the gravity accelerometer, are necessary for prediction. On the other hand, the active states such as walking, walking up, and walking down require three sensor features: sensors 65, 508, and 556, which are related to both the gravity accelerometer and the body accelerometer. This is intuitively appealing: under the static contexts, the body accelerometer measurements would be relatively constant and unnecessary for prediction, whereas for the active contexts, the body accelerometer measurements are necessary to reason about how the subject is moving and to accurately discriminate between the different active states. Meanwhile, we found that measurements relating to the gyroscope were unnecessary for prediction.

UCI OPPORTUNITY Dataset

We further test our proposed method on the UCI OPPORTUNITY Dataset (Roggen et al., 2010). This dataset consists of multiple different label types for human activity, ranging from locomotion, hand gestures, to object interactions.
The dataset consists of 242 measurements from accelerometers and Inertial Measurement Units (IMUs) attached to the user, as well as accelerometers attached to different objects with which the user can interact. We use the mid-level gesture activities as the target for our models to predict, which contain gestures related to specific objects, such as opening a door and drinking from a cup. A comparison of the accuracy and the percentage of selected features by different models is given in Table 2, while example predictions and a trade-off curve are constructed and shown in Figures 3(a), 3(b), and 3(c), with a similar trend as the results on the UCI HAR dataset. Notably, the trade-off for the nonadaptive models remains constant for λ ∈ {0.0001, 0.001, . . . , 1}, with a sharp decrease in accuracy for λ ≥ 10. A heatmap for the selected features under each activity is shown in Figure 4. Here, the active sensor features across all activities are features 40 and 42, readings of the IMU attached to the subject’s back, feature 82, readings from the IMU attached to the left upper arm (LUA), and features 230 and 239, location tags that estimate the subject’s position. We posit that these general sensor features are selected to track the subject’s overall position and movements, as they are also predominantly selected in cases with no labels. Meanwhile, sensors 5, 6, and 16, readings from the accelerometer attached to the hip, LUA, and back, are specific to activities involving opening/closing doors or drawers. Interestingly, sensors attached to specific objects, such as accelerometers on doors and cups, are unnecessary for prediction. We attribute this to the severe amount of missing values of these sensors. Indeed, the sensors that have the least amount of missing values are the body sensors and the localization tags. We hypothesize that the model prefers these sensors for their consistent discriminative power on multiple activity types compared to the object specific sensors. In addition to these object specific sensors, 5 IMUs, 9 accelerometers, and 2 localization tags can be completely turned off without significantly affecting prediction performance on this task. ExtraSensory Dataset We further test our proposed method on the ExtraSensory Dataset (Vaizman et al., 2017). This is a multilabel classification dataset, where two or more labels can be active at any given time. It consists of 51 different context labels, and 225 sensor features. We frame the problem as a multilabel binary classification problem, where we have a binary output for each label indicating whether it is active. A comparison of the accuracy and selected features by different models tested can be seen in Table 2. Our method is again competitive with the standard GRU model using less than 12% of all the features. A trade-off curve is shown in Figure 5(b), where we see a similar trend for both adaptive and attention models. However we were unable to obtain a feature selection percentage lower than 25% for the nonadaptive model even with λ as large as 104. We believe that this is because at least 25% of statically selected features are needed; otherwise the nonadaptive model will degrade in performance catastrophically, similar to the OPPORTUNITY dataset results. A heatmap and detailed discussion of the features that our model dynamically selected can be found in Appendix C. 
The results on these three datasets along with the results on the NTU-RGB-D dataset in Appendix B indicate that our adaptive monitoring framework provides the best trade-off between feature efficiency and accuracy, while the features that it dynamically selects are also interpretable and associated with the actual activity types. 5 CONCLUSIONS We propose a novel method for performing adaptive feature selection by sequential context-dependent feature subset selection, which is cast into a stochastic optimization formulation by modifying the `0 regularized minimization formulation. To make this problem tractable, we perform a stochastic relaxation along with a differentiable reparamaterization, making the optimization amenable to gradient-based optimization with auto-differentiation. We apply this method to human activity recognition by implementing our method to Recurrent Neural Network-based architectures. We benchmark our model on four different activity recognition datasets and have compared it with various adaptive and static feature selection benchmarks. Our results show that our model maintains a desirable prediction performance using a fraction of the sensors or features. The features that our model selected were shown to be interpretable and associated with the activity types. B RESULTS AND DISCUSSION OF THE NTU-RGB-D DATASET We have tested our proposed method on the NTU-RGB-D dataset (Shahroudy et al., 2016). This dataset consists of 60 different activities performed by either a single individual or two individuals. The measurements of this dataset are in the form of skeleton data consisting of 25 different 3D coordinates of the corresponding joints of the participating individuals. We compare our method with three different baselines shown in Table 3: the standard independent RNN, a soft attention baseline, and a thresholded attention baseline. We see that our method maintains a competitive accuracy compared to the baseline using less than 50% of the features. On the other hand, because the thresholded attention formulation is not specifically optimized for feature sparsity, we see that it performs significantly worse compared to the other methods. Meanwhile, the softattention slightly improves upon the accuracy of the base architecture. However, as also indicated by our other experiments, soft-attention is not a dynamic feature selection method, and tends to select 100% of the features at all times. A heatmap for the features selected under each activity is shown in Figure 7. Here, we can see that there are two distinct feature sets used for two different types of interactions: single person interactions and two person interactions. Indeed, since the two person activities require sensor measurements from two individuals, the dynamic feature selection would need to prioritize different features to observe their activities as opposed to single person activities. C RESULTS AND DISCUSSION OF THE EXTRASENSORY DATASET A heatmap of the features selected under each activity state can be seen in Figure 8. As shown, there are four groups of sensor features that are used across activities: the phone magnetometer (57-71), watch accelerometer magnitude (85-88), watch accelerometer direction (101-105), and location (138-147). For two particular states, ‘on a bus’ and ‘drinking alcohol’, phone accelerometer measurements (5-52) become necessary for prediction. Some states such as ‘at home’, ‘at main workplace’, and ‘phone in pocket’ are notably sparse in sensor feature usage. 
We believe that these states are static, and do not require much sensor usage to monitor effectively. Other sensors such as the phone gyroscope, phone state, audio measurements and properties, compass, and various low-frequency sensors are largely unnecessary for prediction in this dataset. D EFFECTS OF THE HYPERPARAMETER τ ON MODEL PERFORMANCE We observe the effects of the temperature hyperparameter in (7) on our model’s performance. To do this, we have tested several hyperparameter values in our experiment with the UCI HAR dataset. The results of our tests can be seen in Figure 9. In general, the settings with the temperature parameters below 1 generally yield the best results with no noticeable performance difference. Once the temperature is set to above 1, we observe a sharp increase in errors. We attribute this to the mismatch between training and testing setups, where in testing, discrete binary values are sampled while in training, the samples are reduced to an equal weighting between the features. E MODEL PERFORMANCE AND STABILITY ACROSS TIME We show the average accuracy over every 1000 seconds of running the model on the testing subjects in the UCI HAR dataset in Table 4. Based on the performance of the model across time, the model is shown to be stable for long-term predictions. In general, there is no clear temporal degradation in the testing performance for this dataset. Instead, the change of prediction errors is mostly dependent on the underlying activity types. F UNION OF ALL FEATURES SELECTED BY THE ADAPTIVE MODEL Here, in addition to showing the average number of selected features, we compute the percentage of all features considered by our model across the full time-length. In other words, the results presented here show the union of selected features across the time horizon. In Section 4, we chose to present the average number of selected features as it directly reflects the number of required sensors for accurate HAR. Hence, it clearly shows the benefits of our proposed dynamic/adaptive feature selection with respect to the power usage for sensor data collection. From Table 5, it is clear that the percentage of all the features considered across the full time-length is also significantly low for each of the three benchmark datasets, which further validates the potential of our dynamic feature selection even when additional operational cost of turning on/off sensors needs to be considered. DYNAMIC FEATURE SELECTION FOR EFFICIENT AND INTERPRETABLE HUMAN ACTIVITY RECOGNITION Anonymous authors Paper under double-blind review ABSTRACT In many machine learning tasks, input features with varying degrees of predictive capability are usually acquired at some cost. For example, in human activity recognition (HAR) and mobile health (mHealth) applications, monitoring performance should be achieved with a low cost to gather different sensory features, as maintaining sensors incur monetary, computation, and energy cost. We propose an adaptive feature selection method that dynamically selects features for prediction at any given time point. We formulate this problem as an `0 minimization problem across time, and cast the combinatorial optimization problem into a stochastic optimization formulation. We then utilize a differentiable relaxation to make the problem amenable to gradient-based optimization. Our evaluations on four activity recognition datasets show that our method achieves a favorable trade-off between performance and the number of features used. 
Moreover, the dynamically selected features of our approach are shown to be interpretable and associated with the actual activity types. 1 INTRODUCTION Acquiring predictive features is critical for building trustworthy machine learning systems, but this often comes at a daunting cost. Such a cost can be in the form of energy needed to maintain an ambient sensor [1, 2], time needed to complete an experiment [3], or manpower required to monitor a hospital patient [4]. Therefore, it becomes important not only to maintain good performance in the specified task, but also a low cost to gather these features. Indeed, existing Human Activity Recognition (HAR) methods typically use a fixed set of sensors, potentially collecting redundant features to discriminate contexts [5, 6, 7, 8]. Classic feature selection methods such as the LASSO and its variants can address the performance-cost trade-off by optimizing an objective penalized by a term that helps promote feature sparsity [9, 10, 11, 12]. Such feature selection formulations are often static, that is, a fixed set of features are selected a priori. However, different features may offer different predictive power under different contexts. For example, a health worker may not need to monitor a recovering patient as frequently compared to a patient with the declining condition; an experiment performed twice may be redundant; or a smartphone sensor may be predictive when the user is walking but not when the user is in a car. By adaptively selecting which sensor(s) to observe at any given time point, one can further reduce the inherent cost for prediction and achieve a better trade-off between cost and prediction accuracy. In addition to cost-efficiency, an adaptive feature selection formulation can also lead to more inter- pretable and trustworthy predictions. Specifically, the predictions made by the model are only based on the selected features, providing a clear relationship between input features and model predictions. Existing efforts on interpreting models are usually based on some post-analyses of the predictions, including the approaches in (1) visualizing higher level representations or reconstructions of inputs based on them [13, 14], (2) evaluating the sensitivity of predictions to local perturbations of inputs or the input gradients [15, 16], and (3) extracting parts of inputs as justifications for predictions [17]. Another related but orthogonal direction is model compression of training sparse neural networks with the goal of memory and computational efficiency [18, 19, 20]. All these works require collecting all features first and provide post-hoc feature relevance justifications or network pruning. Recent efforts on dynamic feature selection adaptively assign features based on immediate statistics [21, 22, 1, 23], ignoring the information a feature may have on future predictions. Others treat feature selection as a Markov Decision Process (MDP) and use Reinforcement Learning (RL) to solve it [24, 25, 26, 27, 28, 2]. However, solving the RL objective is not straightforward. Besides being sensitive to hyperparameter settings in general, approximations such as state space discretization and greedy approximations of the combinatorial objective were used to make the RL problem tractable. To this end, we propose a dynamic feature selection method that can be easily integrated into existing deep architectures and trained from end to end, enabling task-driven dynamic feature selection. 
To achieve this, we define a feature selection module that dynamically selects which features to use at any given time point. We then formulate a sequential combinatorial optimization that minimizes the trade-off between the learning task performance and the number of features selected at each time point. To make this problem tractable, we cast this combinatorial optimization problem into a stochastic optimization formulation. We then adopt a differentiable relaxation of the discrete feature selection variables to make it amenable to stochastic gradient descent based optimization. It therefore can be plugged-in and jointly optimized with state-of-the-art neural networks, achieving task-driven feature selection over time. To show our method’s ability to adaptively select features while maintaining good performance, we evaluate it on four time-series activity recognition datasets: the UCI Human Activity Recognition (HAR) dataset [29], the OPPORTUNITY dataset [30], the ExtraSensory dataset [31], as well as the NTU-RGB-D dataset [32]. Several ablation studies and comparisons with other dynamic and static feature selection methods demonstrate the efficacy of our proposed method. Specifically, our dynamic feature selection is able to use as low as 0.28% of the sensor features while still maintaining good human activity monitoring accuracy. Moreover, our dynamically selected features are shown to be interpretable with direct correspondence with different contexts and activity types. 2 METHODOLOGY 2.1 THE `0-NORM MINIMIZATION PROBLEM Many regularization methods have been developed to solve simultaneous feature selection and model parameter estimation [9, 12, 33, 34, 35]. The ideal penalty for the purpose of feature selection is the `0-norm of the model coefficients for all predictors. This norm is equivalent to the number of nonzero terms in all the model coefficients. Given a dataset D containing N independent and identically distributed (iid) input-output pairs {(x1,y1), . . . , (xN ,yN )} with each xi containing P features, a hypothesis class of predictor functions f(·;θ), and a loss function L(ŷ,y) between prediction ŷ and true output y, the `0-norm regularized optimization problem can be written as follows: min θ 1 N ( N∑ i=1 L(f(xi;θ),yi) ) + λ‖θ‖0, (1) where ‖θ‖0 = ∑P j=1 I[θj 6= 0] penalizes the number of nonzero model coefficients. In the models that linearly transform the input features xi, penalizing the weights relating to each feature in xi enables sparse feature subset selection. However, such a selection is static, as it does not adaptively select features that are appropriate for a given context. Moreover, the optimization above is computationally prohibitive as it involves combinatorial optimization to select the subset of nonzero model coefficients corresponding to the input features. In the following, we formulate our adaptive dynamic feature selection problem when learning with multivariate time series. Coupled with training recurrent neural networks, this adaptive feature selection problem is transformed into a sequential context-dependent feature subset selection problem, to which we devise a stochastic relaxation to make the problem tractable. 2.2 DYNAMIC FEATURE SELECTION VIA SEQUENTIAL CONTEXT-DEPENDENT FEATURE SUBSET SELECTION Instead of finding a subset of nonzero model coefficients, an equivalent formulation can be derived by directly selecting the feature subset. 
Without loss of generality, let z be a binary vector that indicates whether each feature is selected or not. Then, the original $\ell_0$-norm optimization formulation can be equivalently written as follows:

$$\min_{\theta, z} \; \frac{1}{N} \left( \sum_{i=1}^{N} L(f(x_i \circ z; \theta), y_i) \right) + \lambda \|z\|_0. \quad (2)$$

Compared to the original problem, the penalty on the number of selected features is through the $\ell_0$-norm of z. This formulation is more flexible, as z can be made dependent on corresponding input features, output labels, or any contextual information, allowing us to formulate our dynamic feature selection problem when learning with multivariate time series data. Specifically, let the input-output pairs $(x_i, y_i)$ be a pair of time series data of length $T_i$. At each time t, our model predicts the output $y_i^t$, as well as the next feature set to select, $z_i^t$. This optimization problem can be formulated as:

$$\min_{\theta, z} \; \frac{1}{N} \left( \sum_{i=1}^{N} \sum_{t=1}^{T_i} L(f(x_i^{0:t-1} \circ z_i^{0:t-1}; \theta), y_i^t) \right) + \lambda \sum_{i=1}^{N} \sum_{t=1}^{T_i} \|z_i^t\|_0. \quad (3)$$

Here, we are tasked to find a set of parameters θ and feature sets $z_i^t$ for each sample i at each time point t to optimize the trade-off between model performance and the number of selected features. The model then uses the parameters and the previously observed features $\mathcal{X}_i^t \triangleq x_i^{0:t-1} \circ z_i^{0:t-1}$ to infer the next output $y_i^t$. However, the above formulation remains intractable, as it involves combinatorial optimization to select the feature subsets at each time point, in addition to the joint optimization of the model parameters and variable selection. Naively, one may also need to solve a separate optimization problem to find $z_i^t$ for each time point during the run time. In the following section, we derive a relaxation based on stochastic optimization parameterizing the $z_i^t$'s to make the above problem tractable.

2.3 RELAXATION THROUGH STOCHASTIC OPTIMIZATION

Instead of finding the exact feature subsets indexed by $z_i^t$ that achieve the optimal regularized objective, one can treat these $z_i^t$'s as binary random variables and seek to optimize the distribution π(z|φ) that generates these random variables. For the ease of exposition, we first focus on the relaxation of the non-adaptive formulation in (1) as follows:

$$\min_{\theta, \phi} \; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}} \left[ \mathbb{E}_{z \sim \pi(z|\phi)} \left[ L(f(x_i \circ z; \theta), y_i) + \lambda \|z\|_0 \right] \right]. \quad (4)$$

Note that the solution to this problem is equivalent to the original one, as the original combinatorial problem can be recovered by setting π(z|φ) = Bern(φ), a Bernoulli distribution parameterized by φ, and restricting φ ∈ {0, 1}. Using this relaxation, the regularization term can now be evaluated analytically:

$$\mathbb{E}_{z \sim \pi(z|\phi)} \left[ \|z\|_0 \right] = \mathbb{E}_{z \sim \mathrm{Bern}(\phi)} \left[ \|z\|_0 \right] = \sum_{j=1}^{P} \pi(z|\phi)_j = \sum_{j=1}^{P} \phi_j. \quad (5)$$

On the other hand, the outer expectation in (4) can be approximated using minibatches. To extend the above relaxation for time series data, we first note that our adaptive feature selection formulation in (3) allows each time point to have its own feature selection distribution $\pi_i^t(z|\phi) \triangleq \pi(z \mid \mathcal{X}_i^{t-1}, \phi)$, conditioned on the previous observations $\mathcal{X}_i^{t-1}$. Let $\pi_i(z|\phi)$ be the set of $\pi_i^t(z|\phi)$ for all $t \in \{1, \ldots, T_i\}$. The stochastic relaxation of the adaptive feature selection formulation can be written as follows:

$$\min_{\theta, \phi} \; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}} \left[ \mathbb{E}_{z_i \sim \pi_i(z|\phi)} \left[ \sum_{t=1}^{T_i} L(f(\mathcal{X}_i^{t-1}; \theta), y_i^t) \right] + \lambda \sum_{t=1}^{T_i} \sum_{j=1}^{P} \pi_i^t(z|\phi)_j \right]. \quad (6)$$

2.4 MODEL PARAMETERIZATION AND DIFFERENTIABLE RELAXATION

The difficulty in solving the above problem using gradient descent is that the discrete random variables $z_i^t$ are not directly amenable to stochastic reparameterization techniques.
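Before turning to that reparameterization, note that the penalty term in Eqs. (5)-(6) is already differentiable: it is simply a sum of the selection probabilities. A minimal sketch is given below; the tensor shapes and function name are our own illustrative assumptions.

```python
import torch

def expected_l0_penalty(select_probs, lam):
    """select_probs: (T, B, P) Bernoulli selection probabilities pi_t(z|phi).
    Since E[||z||_0] = sum_j pi_j, the penalty in Eqs. (5)-(6) is just the sum
    of probabilities over time steps and features (averaged over the batch,
    which approximates the outer expectation over the data)."""
    return lam * select_probs.sum(dim=(0, 2)).mean()

# total loss = task loss over time + expected_l0_penalty(probs, lam)
```

Only the task loss requires the gradient estimator introduced next, since the binary draws enter the predictor itself.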
An effective and simple to implement formulation that we adopt is the Gumbel-Softmax reparameterization [36, 37], which relaxes a discrete valued random variable z parameterized by φ to a continuous random variable z̃. Firstly, we can parameterize π(z|X t−1i ,φ) using a vector-valued function σ(X t−1 i ,φ) of the previous observations X t−1i , with φ now being the parameters of σ(·). The distribution can now be rewritten as π(z|X t−1i ,φ) = Bern(σ(X t−1 i ,φ)). With this, the discrete valued random variables z t i can be relaxed into continuous random variables z̃ti as follows: z̃ti = 1 1 + exp (−(logσ(X t−1i ,φ) + L)/τ) . (7) Here, L = log u− log(1− u) is a logistic distribution, where u ∼ Unif(0, 1), and τ is a temperature parameter. For low values of τ , z̃ti approaches a sample of a binary random variable, recovering the original discrete problem, while for high values, z̃ti will equal 1 2 . With this, we are able to compute gradient estimates of z̃ti and approximate the gradient of z t i as∇θ,φzti ≈ ∇θ,φz̃ti . This enables us to backpropagate through the discrete random variables and train the selection parameters along with the model parameters jointly using stochastic gradient descent. Meanwhile, at test time, we sample binary random variables from the learned probabilities. 2.5 MODEL SPECIFICATION To complete our formulation, we specify the model architecture that we use. We have implemented our adaptive dynamic feature selection with a Gated Recurrent Unit (GRU) [38], a type of Recurrent Neural Network (RNN) [39], as shown in Figure 1. Here, we have the previous observations X t−1i being summarized by the hidden state h t−1 i . For adaptive feature selection, the selection distribution is made dependent on ht−1i using a sigmoid of its linear transformation by a weight matrix W as follows: σ(X t−1i ,φ) = SIGMOID(Wh t−1 i ), such that φ = {W}. We note that such a module can be easily integrated into many existing deep architectures and trained from end to end, allowing for task-driven feature selection. For example, the module can be applied to Recurrent Convolutional Neural Networks (RCNN) [40] to selectively determine which convolutional patches/channels to use, or to general feedforward networks to selectively deactivate certain neurons/channels to reduce computation. We have demonstrated this ability by applying it to an Independent RNN [41] benchmarked on the NTU-RGB-D dataset [32], as detailed in Appendix A.4. 3 RELATED WORK Existing HAR systems typically use a fixed set of sensors, potentially collecting redundant features for easily discriminated contexts. Methods that attempt to find a fixed or static feature set often rank feature sets using metrics such as Information Gain [5], or relevancy ranking through a filtering strategy [6, 7, 8]. However, static feature selection can potentially result in collecting redundant information for highly discriminable contexts. Work on dynamic feature selection can be divided into Reinforcement Learning (RL) based and non-RL approaches. Non-RL based approaches vary from assigning certain features to certain activities [21], pre-defining feature subsets for prediction [22, 42], optimizing the trade-off between prediction entropy and the number of selected features [1], to building a metaclassifier for sensor selection [23]. These methods all use immediate rewards to perform feature selection. 
For predicting long activity sequences, this potentially ignores the information that a feature may have on future predictions, or conversely, overestimate the importance of a feature given previous observations. Among the RL based approaches, some methods attempt to build an MDP to decide which feature to select next or whether to stop acquiring features and make a prediction [24, 25, 26]. These methods condition the choice of one feature on the observation generated by another one, instead of choosing between all sensors simultaneously. Spaan and Lima [27] and Satsangi et al. [28] formulated a Partially Observable MDP (POMDP) using a discretization of the continuous state to model the policy. Yang et al. [2] formulate an RL objective by penalizing the prediction performance by the number of sensors used. Although using a desirable objective, the method employs a greedy maximization process to approximately solve the combinatorial optimization. Moreover, they do not integrate easily with existing deep architectures. Attention is another method worth noting, as it is able to select the most relevant segments of a sequence for the current prediction [43]. Attention modules have been recently used for activity recognition [44]. However, like most attention methods, it requires all of the features to be observed before deciding which features are the most important for prediction. Moreover, the number of instances attended to is not penalized. Finally, soft attention methods typically weight the inputs, instead of selecting the feature subset. Indeed, our experiments on naively applying attention for dynamic feature selection show that it always selects 100% of the features at all times. Sparse regularization has previously been formulated for deep models, e.g., [45, 18, 46], but their focus has primarily been in statically compressing model sizes or reducing overfitting, instead of dynamically selecting features for prediction. In particular, `1 regularization is a common method to promote feature sparsity [9, 10, 11, 12]. Finally, there have been many formulations that propose to solve the issue of backpropagation through discrete random variables [36, 37, 47, 48, 49]. REBAR [47] and RELAX [48] employ REINFORCE and introduce relaxation-based baselines to reduce sample variance of the estimator. However, these baseline functions increase the computation and cause potential conflict between minimizing the sample variance of the gradient estimate and maximizing the expectation objective. Augment-REINFORCE-Merge is a self-control gradient estimator that does not need additional baselines [49]. It provides unbiased gradient estimates that exhibit low variance, but its direct application to autoregressive or sequential setups is not addressed by Yin and Zhou [49] and leads to approximate gradients. Moreover, an exact sequential formulation will require prohibitive computation, squared in sequence length forward passes. 4 EXPERIMENTS Benchmark Datasets and Performance Evaluation We evaluate our model on four different datasets: the UCI Human Activity Recognition (HAR) using Smartphones Dataset [29], the OPPORTUNITY Dataset [30], the ExtraSensory dataset [31], and the NTU-RGB-D dataset [32]. Although there are many other human activity recognition benchmark datasets [50], we choose the above datasets to better convey our message of achieving feature usage efficiency and interpretability using our adaptive feature selection framework with the following reasons. 
First, the UCI HAR dataset is a clean dataset with no missing values, allowing us to benchmark different methods without any discrepancies in data preprocessing confounding our evaluations. Second, the OPPORTUNITY dataset contains activity labels that correspond to specific sensors. An optimal adaptive feature selector should primarily choose these sensors under specific contexts with clear physical meaning. Finally, the ExtraSensory dataset studies a multilabel classification problem, where two or more labels can be active at any given time, while the NTU-RGB-D dataset is a complicated activity recognition dataset with over 60 classes of activities using data from 25 skeleton joints. These datasets allow us to benchmark model performance in a complex setting. Due to the page limit, our implementation details and results on the NTU-RGB-D dataset are available in Appendix A and B. We investigate several aspects of our model performance on these benchmarks. To show the effect in prediction accuracy when our selection module is considered, we compare its performance to a standard GRU network [51]. To show the effect of considering dynamic feature selection, we compare a nonadaptive `0 formulation that statically selects features by solving (4) [18]. The performance of our `0 regularized formulation is also benchmarked with an `1 regularized formulation. To benchmark the performance of our differentiable relaxation-based optimization strategy, we implement the Straight-Through estimator [52] and Augment-REINFORCE-Merge (ARM) gradient estimates [49] as alternative methods to optimize our formulation. As stated in the previous section, the fully sequential application of ARM was not addressed in the original paper, and will be prohibitively expensive to compute exactly. Hence, we combine ARM and Straight-Through (ST) estimator [52] as another approach to optimize our formulation. More specifically, we calculate the gradients with respect to the Bernoulli variables with ARM, and use the ST estimator to backpropagate the gradients through the Bernoulli variables to previous layers’ parameters. To further show the importance of considering the sparse regularized formulation, we compare with an attention-based feature selection, selecting features based on the largest attention weights. Because attention yields feature attention weights instead of feature subsets, we select features by using a hard threshold α of the attention weights and scaling the selected features by 1− α for different values of α. Indeed, without this modification, we observe that an attention-based feature selection would select 100% of the features at all times. Finally, we have attempted to implement the dynamic feature selection method by Yang et al. [2] as a distinctly different benchmark. However, without any implementation details provided by the authors, we were not able to reproduce their results. UCI HAR Dataset We first test our proposed method on performing simultaneous prediction and adaptive feature selection on the UCI HAR dataset [29]. This dataset consists of 561 smartphone sensor measurements including various gyroscope and accelerometer readings, with the task of inferring the activity that the user performs at any given time. There are six possible activities that a subject can perform: walking, walking upstairs, walking downstairs, sitting, standing, and laying. 
We first compare various optimization methods, using stochastic gradients by differential relaxation using Gumbel-Softmax reparametrization, ARM, ST-ARM, Straight-Through gradients, and an `1 regularized formulation to solve adaptive feature selection. The results are provided in Table 1. As shown, Gumbel-Softmax achieves the best prediction accuracy with the least number of features. Utilizing either the Straight Through estimator, ARM, or ST-ARM for gradient estimation cannot provide a better balance between accuracy and efficiency compared with the Gumbel-Softmax relaxation-based optimization. Indeed, the performance of the ST estimator is expected, as there is a mismatch between the forward propagated activations and the backward propagated gradients in the estimator. Meanwhile, we attribute the lower performance of the ARM and ST-ARM optimizer to its use in a sequential fashion, which was not originally considered. The lower performance of the `1 regularized formulation is expected, as `1 regularization is an approximation to the problem of selecting the optimal feature subset. In the following experiments, we have seen similar trends and only report the results from the Gumbel-Softmax based optimization. Benchmarking results of different models are given in Table 2. As shown, our adaptive feature selection model is able to achieve a competitive accuracy using only 0.28% of the features, or on average about 1.57 sensors at any given time. We also observe that both the attention and our adaptive formulation is able to improve upon the accuracy of the standard GRU, suggesting that feature selection can also regularize the model to improve accuracy. Although the attention-based model yields the best accuracy, this comes at a cost of utilizing around 50% of the features at any given time. We study the effect of the regularization weight λ by varying it from λ ∈ {1, 0.1, 0.01, 0.005, 0.001}. We compare this with the attention model by varying the threshold α used to select features from α ∈ {0.5, 0.9, 0.95, 0.99, 0.995, 0.999}, as well as the nonadaptive model by varying its λ from λ ∈ {1000, 100, . . . 0.01, 0.005, 0.001}. A trade-off curve between the number of selected features and the performance for the three models can be seen in Figure 2(b). As shown in the figure, the accuracy of the attention model suffers increasingly with smaller feature subsets, as attention is not a formulation specifically tailored to find sparse solutions. On the other hand, the accuracy of our adaptive formulation is unaffected by the number of features, suggesting that selecting around 0.3% of the features on average may be optimal for the given problem. It further confirms that our adaptive formulation selects the most informative features given the context. The performance of the nonadaptive model is consistent for feature subsets of size 10% or greater. However, it suffers a drop in accuracy for extremely small feature subsets. This shows that for static selection, selecting a feature set that is too large would result in collecting many redundant features for certain contexts, while selecting a feature set that is too small would be insufficient for maintaining accuracy. An example of dynamically selected features can be seen in Figure 2(a). We plot the prediction of our model compared to the true label and illustrate the features that are used for prediction. We also plot a heatmap for the features selected under each activity in Figure 2(c). 
Note that mainly 5 out of the 561 features are used for prediction at any given time. Observing the selected features, we see that for the static activities such as sitting, standing, and laying, only sensor feature 52 and 63, features relating to the gravity accelerometer, are necessary for prediction. On the other hand, the active states such as walking, walking up, and walking down requires 3 sensor features: sensor 65, 508, and 556, which are related to both the gravity accelerometer and the body accelerometer. This is intuitively appealing as, under the static contexts, the body accelerometer measurements would be relatively constant, and unnecessary for prediction. On the other hand, for the active contexts, the body accelerometer measurements are necessary to reason about how the subject is moving and accurately discriminate between the different active states. Meanwhile, we found that measurements relating to the gyroscope were unnecessary for prediction. UCI OPPORTUNITY Dataset We further test our proposed method on the UCI OPPORTUNITY Dataset [30]. This dataset consists of multiple different label types for human activity, ranging from locomotion, hand gestures, to object interactions. The dataset consists of 242 measurements from accelerometers and Inertial Measurement Units (IMUs) attached to the user, as well as accelerometers attached to different objects with which the user can interact. We use the mid-level gesture activities as the target for our models to predict, which contain gestures related to specific objects, such as opening a door and drinking from a cup. A comparison of the accuracy and the percentage of selected features by different models is given in Table 2, while example predictions and a trade-off curve are constructed and shown in Figures 3(a), 3(b), and 3(c), with a similar trend as the results on the UCI HAR dataset. Notably, the trade-off for the nonadaptive models remains constant for λ ∈ {0.0001, 0.001, . . . , 1}, with a sharp decrease in accuracy for λ ≥ 10. A heatmap for the selected features under each activity is shown in Figure 4. Here, the active sensor features across all activities are features 40 and 42, readings of the IMU attached to the subject’s back, feature 82, readings from the IMU attached to the left upper arm (LUA), and features 230 and 239, location tags that estimate the subject’s position. We posit that these general sensor features are selected to track the subject’s overall position and movements, as they are also predominantly selected in cases with no labels. Meanwhile, sensors 5, 6, and 16, readings from the accelerometer attached to the hip, LUA, and back, are specific to activities involving opening/closing doors or drawers. Interestingly, sensors attached to specific objects, such as accelerometers on doors and cups, are unnecessary for prediction. We attribute this to the severe amount of missing values of these sensors. Indeed, the sensors that have the least amount of missing values are the body sensors and the localization tags. We hypothesize that the model prefers these sensors for their consistent discriminative power on multiple activity types compared to the object specific sensors. In addition to these object specific sensors, 5 IMUs, 9 accelerometers, and 2 localization tags can be completely turned off without significantly affecting prediction performance on this task. ExtraSensory Dataset We further test our proposed method on the ExtraSensory Dataset [31]. 
This is a multilabel classification dataset, where two or more labels can be active at any given time. It consists of 51 different context labels, and 225 sensor features. We frame the problem as a multilabel binary classification problem, where we have a binary output for each label indicating whether it is active. A comparison of the accuracy and selected features by different models tested can be seen in Table 2. Our method is again competitive with the standard GRU model using less than 12% of all the features. A trade-off curve is shown in Figure 5(b), where we see a similar trend for both adaptive and attention models. However we were unable to obtain a feature selection percentage lower than 25% for the nonadaptive model even with λ as large as 104. We believe that this is because at least 25% of statically selected features are needed; otherwise the nonadaptive model will degrade in performance catastrophically, similar to the OPPORTUNITY dataset results. A heatmap and detailed discussion of the features that our model dynamically selected can be found in Appendix C. The results on these three datasets along with the results on the NTU-RGB-D dataset in Appendix B indicate that our adaptive monitoring framework provides the best trade-off between feature efficiency and accuracy, while the features that it dynamically selects are also interpretable and associated with the actual activity types. 5 CONCLUSIONS We propose a novel method for performing adaptive feature selection by sequential context-dependent feature subset selection, which is cast into a stochastic optimization formulation by modifying the `0 regularized minimization formulation. To make this problem tractable, we perform a stochastic relaxation along with a differentiable reparamaterization, making the optimization amenable to gradient-based optimization with auto-differentiation. We apply this method to human activity recognition by implementing our method to Recurrent Neural Network-based architectures. We benchmark our model on four different activity recognition datasets and have compared it with various adaptive and static feature selection benchmarks. Our results show that our model maintains a desirable prediction performance using a fraction of the sensors or features. The features that our model selected were shown to be interpretable and associated with the activity types. 70% for training, 10% for validation, and 20% for testing. The base model we utilize is a one-layer GRU with 2240 neurons for its hidden state. We use a temperature of 0.05 for the Gumbel-Softmax relaxation. We use the binary cross-entropy of the predicted vs. actual labels as the performance measure, where the model outputs a binary decision for each label, representing whether each label is active or not. We do not include the performance loss for the missing labels and scale the total performance loss of the observed labels for each batch by #timepoints×#total labels#observed labels in labelled timepoints . We optimize this scaled loss with a batch size of 100 using the RMSProp optimizer, setting the learning rate to 10−4 and the smoothing constant to 0.99 for 10000 epochs. We then save both the latest model and the best model validated on the validation set. A.4 NTU-RGB-D DATASET We first preprocess the NTU-RGB-D dataset to remove all the samples with missing skeleton data. We then segment the time-series skeleton data across subjects into 66.5% training, 3.5% validation, and 30% testing sets. 
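Referring back to the ExtraSensory training setup described above, the masked multilabel loss with the stated rescaling can be sketched as follows. The exact form of the scaling factor, (#time points × #total labels) / (#observed labels in labelled time points), is our reading of the text and should be treated as an assumption, as are the function and variable names.

```python
import torch
import torch.nn.functional as F

def masked_multilabel_loss(logits, targets, observed):
    """logits, targets, observed: (T, B, num_labels); `observed` is 1 where a
    label is actually annotated. Missing labels contribute no loss."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    bce = bce * observed                                    # drop missing labels
    # Assumed scaling: (#time points * #total labels) / (#observed labels)
    scale = (logits.shape[0] * logits.shape[2]) / observed.sum().clamp(min=1.0)
    return bce.sum() * scale
```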
The baseline model that we have implemented for the NTU-RGB-D dataset is the Independent RNN [41]. This model consists of stacked RNN modules with several additional dropout, batch normalization, and fully connected layers in between. Our architecture closely follows the densely connected independent RNN of [41]. To incorporate feature selection using either our adaptive formulation or an attention-based formulation, we add an additional RNN to the beginning of this model. This RNN takes as input the 25 different joint features and is tasked to select the joints to use for prediction further along the architecture pipeline. Since the joints are in the form of 3D coordinates, our feature selection method is modified such that it selects either all 3 of the X, Y, and Z coordinates of a particular joint, or none at all. Our architecture can be seen in Figure 6. Similar as the baseline method presented in [41], we have trained this architecture using a batch size of 128 and a sequence length of 20 using the Adam optimizer with a patience threshold of 100 iterations. We then save both the latest model and the best model validated on the validation set. B RESULTS AND DISCUSSION OF THE NTU-RGB-D DATASET We have tested our proposed method on the NTU-RGB-D dataset [32]. This dataset consists of 60 different activities performed by either a single individual or two individuals. The measurements of this dataset are in the form of skeleton data consisting of 25 different 3D coordinates of the corresponding joints of the participating individuals. We compare our method with three different baselines shown in Table 3: the standard independent RNN, a soft attention baseline, and a thresholded attention baseline. We see that our method maintains a competitive accuracy compared to the baseline using less than 50% of the features. On the other hand, because the thresholded attention formulation is not specifically optimized for feature sparsity, we see that it performs significantly worse compared to the other methods. Meanwhile, the softattention slightly improves upon the accuracy of the base architecture. However, as also indicated by our other experiments, soft-attention is not a dynamic feature selection method, and tends to select 100% of the features at all times. A heatmap for the features selected under each activity is shown in Figure 7. Here, we can see that there are two distinct feature sets used for two different types of interactions: single person interactions and two person interactions. Indeed, since the two person activities require sensor measurements from two individuals, the dynamic feature selection would need to prioritize different features to observe their activities as opposed to single person activities. Table 3: Comparison of various methods for activity recognition on the NTU-RGB-D dataset. *Accuracies and average number of features selected are in (%). 
Method                  Accuracy   Features Selected
Adaptive                80.54      49.65
Thresholded attention   40.07      52.31
Soft attention          83.28      100
No selection            83.02      100

Figure 7: Heatmap of sensor feature activations under each activity state of the NTU-RGB-D dataset (the 25 skeleton joints on one axis, the 60 activity classes on the other).

C RESULTS AND DISCUSSION OF THE EXTRASENSORY DATASET

A heatmap of the features selected under each activity state can be seen in Figure 8. As shown, there are four groups of sensor features that are used across activities: the phone magnetometer (57-71), watch accelerometer magnitude (85-88), watch accelerometer direction (101-105), and location (138-147). For two particular states, ‘on a bus’ and ‘drinking alcohol’, phone accelerometer measurements (5-52) become necessary for prediction. Some states such as ‘at home’, ‘at main workplace’, and ‘phone in pocket’ are notably sparse in sensor feature usage. We believe that these states are static, and do not require much sensor usage to monitor effectively. Other sensors such as the phone gyroscope, phone state, audio measurements and properties, compass, and various low-frequency sensors are largely unnecessary for prediction in this dataset.
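As a small addendum to the skeleton-based experiments (Appendices A.4 and B), the joint-level grouped selection used for NTU-RGB-D, where a joint is either kept with all three of its coordinates or dropped entirely, can be sketched as follows; the names and shapes are our own illustrative choices.

```python
import torch

def select_joints(coords, joint_probs):
    """coords: (B, 25, 3) skeleton joint coordinates; joint_probs: (B, 25)
    per-joint selection probabilities. A selected joint keeps all of its
    X, Y, Z coordinates; an unselected joint is zeroed out entirely."""
    z = torch.bernoulli(joint_probs)        # hard draw (relaxed during training)
    return coords * z.unsqueeze(-1)         # broadcast the joint gate over X, Y, Z
```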
1. What is the main contribution of the paper regarding human activity recognition?
2. What are the strengths and weaknesses of the proposed RNN model for adaptive dynamic feature selection?
3. How does the reviewer assess the practicality and efficiency of the algorithm?
4. Are there any minor comments or questions regarding the paper's content, experiments, or results?
Review
This paper proposes an RNN model for adaptive dynamic feature selection, for efficient and interpretable human activity recognition (HAR). Starting from the intuition that human activity can be predicted using a small number of sensors, the paper introduces an l0-norm minimization problem with parameter regularization and lays out the reasoning for formulating a dynamic feature selection model via relaxation. The difficulty of the discrete optimization problem is addressed by a differentiable relaxation known as the Gumbel-Softmax reparameterization technique. The formulation naturally leads to an RNN model that uses histories as input, with an additional sigmoid unit for adaptive feature selection. Empirical studies are performed to show the superiority of the adaptive feature selection network. Results are shown on 1) the UCI-HAR smartphone dataset with 561 features, 2) the UCI Opportunity sensor dataset with 242 features, and 3) the ExtraSensory dataset with 225 features for multilabel binary classification. In particular, with the adaptive feature selection technique, the average number of features necessary for HAR prediction at any given time can be very small (0.3%, 15.9%, and 11.3% of all features, respectively).

Overall, the paper is well written. In particular, the analysis results on the three datasets are clear and detailed, so the reader can understand which sensors were necessary for HAR prediction. The key concern about the paper is that the algorithm lacks practicality. To show that the adaptive selection algorithm is efficient, it should be shown that the algorithm drastically reduces features that are not necessary for prediction over time, while maintaining performance in the lighter feature space. Although the average number of features selected by the adaptive selection algorithm at each snapshot is small, all features are entered as input, which may not help to speed up the algorithm. To claim that the algorithm is efficient, it is required to show that computation cost can be saved. Also, based on the current experimental results, it is difficult to say that features that were not used at an earlier timestamp will not be used at a later timestamp in a different context.

Minor comments and questions:
• Can you report the running time of each model?
• Does this model work in an online setting without tuning? If yes, could you clarify? If not, should I think of this technique as maintaining a dashboard that informs users of the important features at each time step by computing feature importance over time?
• The performance of the adaptive method on the NTU-RGB-D dataset is quite poor. What part of the dataset do you think caused the difficulty in feature selection? Are all features important?
• The technical novelty seems to be low if the proposed model is an RNN with an additional sigmoid layer.
• Figure 2a does not have a ground-truth blue line.
ICLR
Title
Intrinsic Motivation via Surprise Memory

Abstract
We present a new computing model for intrinsic rewards in reinforcement learning that addresses the limitations of existing surprise-driven explorations. The reward is the novelty of the surprise rather than the surprise norm. We estimate the surprise novelty as the retrieval error of a memory network wherein the memory stores and reconstructs surprises. Our surprise memory (SM) augments the capability of surprise-based intrinsic motivators, maintaining the agent's interest in exciting exploration while reducing unwanted attraction to unpredictable or noisy observations. Our experiments demonstrate that the SM combined with various surprise predictors exhibits efficient exploring behaviors and significantly boosts the final performance in sparse reward environments, including Noisy-TV, navigation and challenging Atari games.

1 Introduction
What motivates agents to explore? Successfully answering this question would enable agents to learn efficiently in formidable tasks. Random explorations such as ε-greedy are inefficient in high-dimensional cases, failing to learn despite training for hundreds of millions of steps in sparse reward games (Bellemare et al., 2016). Alternative approaches propose to use intrinsic motivation to aid exploration by adding bonuses to the environment's rewards (Bellemare et al., 2016; Stadie et al., 2015). The intrinsic reward is often proportional to the novelty of the visited state: it is high if the state is novel (e.g. different from past ones (Badia et al., 2020; 2019)) or less frequently visited (Bellemare et al., 2016; Tang et al., 2017). Another view of intrinsic motivation is from surprise, which refers to the result of an experience being unexpected, and is determined by the discrepancy between the expectation (from the agent's prediction) and the observed reality (Barto et al., 2013; Schmidhuber, 2010). Technically, surprise is the difference between the prediction and the observation representation vectors. The norm of the residual (i.e. the prediction error) is used as the intrinsic reward. Here, we will use the terms surprise and surprise norm to refer to the residual vector and its norm, respectively. Recent works have estimated surprise with various predictive models such as dynamics (Stadie et al., 2015), episodic reachability (Savinov et al., 2018) and inverse dynamics (Pathak et al., 2017); and achieved significant improvements with surprise norm (Burda et al., 2018a). However, surprise-based agents tend to be overly curious about noisy or unpredictable observations (Itti and Baldi, 2005; Schmidhuber, 1991). For example, consider an agent watching a television screen showing white noise (the noisy-TV problem). The TV is boring, yet the agent cannot predict the screen's content and will be attracted to the TV due to its high surprise norm. This distraction or "fake surprise" is common in partially observable Markov Decision Processes (POMDPs), including navigation tasks and Atari games (Burda et al., 2018b). Many works have addressed this issue by relying on learning progress (Achiam and Sastry, 2017; Schmidhuber, 1991) or random network distillation (RND) (Burda et al., 2018b). However, the former is computationally expensive, and the latter requires many samples to perform well. This paper overcomes the "fake surprise" issue by using surprise novelty - a new concept that measures the uniqueness of surprise. To identify surprise novelty, the agent needs to compare the current surprise with surprises in past encounters.
One way to do this is to equip the agent with some kind of associative memory, which we implement as an autoencoder whose task is to reconstruct a query surprise. The lower the reconstruction error, the lower the surprise novelty. A further mechanism is needed to deal with the rapid changes in surprise structure within an episode. As an example, if the agent meets the same surprise at two time steps, its surprise novelty should decline, and with a simple autoencoder this will not happen. To remedy this, we add an episodic memory, which stores intra-episode surprises. Given the current surprise, this memory can retrieve similar surprises presented earlier in the episode through an attention mechanism. These surprises act as a context added to the query to help the autoencoder better recognize whether the query surprise has been encountered in the episode or not. The error between the query and the autoencoder's output is defined as the surprise novelty, to which the intrinsic reward is set proportionally. We argue that using surprise novelty as an intrinsic reward is better than using the surprise norm. In POMDPs, surprise norms can be very large since the agent cannot predict its environment perfectly, yet there may exist patterns of prediction failure. If the agent can remember these patterns, it will not feel surprised when similar prediction errors appear, regardless of the surprise norms. An important emergent property of this architecture is that when random observations are presented (e.g., white noise in the noisy-TV problem), the autoencoder can act as an identity transformation operator, thus effectively passing the noise through to reconstruct it with low error. We conjecture that the autoencoder is able to do this with the surprise rather than the observation because the surprise space has lower variance, and we show this in our paper. To make our memory system work on the surprise level, we adopt an intrinsic motivation method to generate surprise for the memory. The surprise generator (SG) can be of any kind based on predictive models and is jointly trained with the memory to optimize its own loss function. To train the surprise memory (SM), we optimize the memory's parameters to minimize the reconstruction error. Our contribution is to propose a new concept, surprise novelty, for intrinsic motivation. We argue that it reflects the originality of the environment better than the surprise norm (see the motivating graphics in Fig. 1). In our experiments, the SM helps RND (Burda et al., 2018b) perform well in our challenging noisy-TV problem, while RND alone performs poorly. Not only with RND, we consistently demonstrate significant performance gains when coupling three different SGs with our SM in sparse-reward tasks. Finally, in hard-exploration Atari games, we boost the scores of two strong SGs, resulting in better performance under the low-sample regime.

2 Methods

2.1 Surprise Novelty
Surprise is the difference between expectation and observation (Ekman and Davidson, 1994). If a surprise repeats, it is no longer a surprise. Based on this intuition, we hypothesize that surprises can be characterized by their novelties, and that an agent's curiosity is driven by the surprise novelty rather than the surprise magnitude. Moreover, surprise novelty should be robust against noise: it is small even for random observations. For example, watching a random-channel TV can always be full of surprises as we cannot expect which channel will appear next.
However, the agent should soon find it boring since the surprise of random noise reoccurs repeatedly, and the channels are entirely unpredictable. We propose using a memory-augmented neural network (MANN) to measure surprise novelty. The memory remembers past surprise patterns, and if a surprise can be retrieved from the memory, it is not novel, and the intrinsic motivation should be small. The memory can also be viewed as a reconstruction network. The network can pass its inputs through for random, pattern-free surprises, making them retrievable. Surprise novelty has an interesting property: if some event is unsurprising (the expectation-reality residual is the zero vector), its surprise (a zero vector with norm 0) is always perfectly retrievable (the surprise novelty is 0). In other words, a low surprise norm means low surprise novelty. On the contrary, a high surprise norm can have little surprise novelty as long as the surprise can be retrieved from the memory, either through associative recall or the pass-through mechanism. Another property is that the variance of surprise is generally lower than that of the observation (state), potentially making learning in surprise space easier. This property is formally stated as follows.

Proposition 1. Let X and U be random variables representing the observation and surprise at the same timestep, respectively. Under an imperfect SG, the following inequality holds: for all i, (σ_i^X)^2 ≥ (σ_i^U)^2, where (σ_i^X)^2 and (σ_i^U)^2 denote the i-th diagonal elements of var(X) and var(U), respectively.

Proof. See Appendix E.

2.2 Surprise Generator
Since our MANN requires surprises for its operation, it is built upon a prediction model, which will be referred to as a Surprise Generator (SG). In this paper, we adopt many well-known SGs (e.g. RND (Burda et al., 2018b) and ICM (Pathak et al., 2017)) to predict the observation and compute the surprise u_t and its norm for every step in the environment. The surprise norm is the Euclidean distance between the expectation and the actual observation:

‖u_t‖ = ‖SG(I_t) − O_t‖    (1)

where u_t ∈ R^n is the surprise vector of size n, I_t the input of the SG at step t of the episode, and SG(I_t) and O_t the SG's prediction and the observation target, respectively. The input I_t is specific to the SG architecture choice, which can be the current state (s_t) or the previous state and action (s_{t−1}, a_t). The observation target O_t is usually a transformation (possibly identity or random) of the current state s_t, which serves as the target for the SG's prediction. The SG is usually trained to minimize:

L_SG = E_t[‖u_t‖]    (2)

Here, predictable observations have minor prediction errors, or little surprise. One issue is that a large surprise norm can be simply due to noisy or distracting observations. Next, we propose a remedy for this problem.

2.3 Surprise Memory
The surprise generated by the SG is stored and processed by a memory network dubbed the Surprise Memory (SM). It consists of an episodic memory M and an autoencoder network W, jointly optimized to reconstruct any surprise. At each timestep, the SM receives a surprise u_t from the SG module and reads a content u_t^e from the memory M. {u_t^e, u_t} forms a surprise query q_t to W to retrieve the reconstruction q̃_t. This reconstruction will be used to estimate the novelty of surprises, forming the intrinsic reward r_t^i. Fig. 2 summarizes the operations of the components of our proposed method. Our two-memory design effectively recovers surprise novelty by handling intra- and inter-episode surprise patterns thanks to M and W, respectively.
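Before describing M and W in detail, a minimal sketch of an RND-style surprise generator implementing Eq. 1 and Eq. 2 is given below (PyTorch). The encoder widths and the use of a fixed random target network follow the spirit of Burda et al. (2018b); they are illustrative assumptions, not the exact architecture used in our experiments.

import torch
import torch.nn as nn

class RNDSurpriseGenerator(nn.Module):
    # Sketch: surprise u_t = SG(I_t) - O_t with a fixed random target network (Eq. 1).
    def __init__(self, obs_dim, feat_dim=512):
        super().__init__()
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
        self.target = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
        for p in self.target.parameters():            # the target stays random and fixed
            p.requires_grad_(False)

    def forward(self, obs):
        o_t = self.target(obs).detach()               # observation target O_t
        u_t = self.predictor(obs) - o_t               # surprise vector u_t (Eq. 1)
        loss_sg = u_t.norm(dim=-1).mean()             # L_SG = E_t[||u_t||] (Eq. 2)
        return u_t, loss_sg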
M can quickly adapt and recall surprises that occur within an episode. W is slower and focuses more on consistent surprise patterns across episodes during training. Here the query q_t could be directly set to the surprise u_t. However, this ignores the rapid change in surprise within an episode. Without M, when the SG and W are fixed (during interaction with the environment), their outputs u_t and q̃_t stay the same for the same input I_t. Hence, the intrinsic reward r_t^i also stays the same. This is undesirable since, when the agent observes the same input at different timesteps (e.g., I_1 = I_2), we expect its curiosity to decrease on the second visit (r_2^i < r_1^i). Therefore, we design the SM with M to fix this issue. The episodic memory M stores representations of surprises that the agent encounters during an episode. For simplicity, M is implemented as a first-in-first-out queue whose size is fixed at N. Notably, the content of M is wiped out at the end of each episode, so its information is limited to a single episode. M can be viewed as a matrix M ∈ R^{N×d}, where d is the size of a memory slot. We denote by M(j) the j-th row in the memory, corresponding to the surprise u_{t−j}. To retrieve from M a read-out u_t^e that is close to u_t, we perform content-based attention (Graves et al., 2014) and compute the attention weight as

w_t(j) = (u_t Q) M(j)^T / (‖u_t Q‖ ‖M(j)‖).

The read-out from M is then u_t^e = w_t M V ∈ R^n. Here, Q ∈ R^{n×d} and V ∈ R^{d×n} are learnable weights mapping between the surprise and the memory space. To force the read-out to be close to u_t, we minimize:

L_M = E_t[‖u_t^e − u_t‖]    (3)

The read-out and the SG's surprise form the query surprise to W: q_t = [u_t^e, u_t] ∈ R^{2n}. M stores intra-episode surprises to assist the autoencoder in preventing the agent from exploring fake surprise within the episode. Since we train the parameters to reconstruct u_t using past surprises in the episode, if the agent visits a state whose surprise is predictable from those in M, ‖u_t^e − u_t‖ should be small. Hence, the read-out context u_t^e contains no more information than u_t, and reconstructing q_t with W becomes easier as it is equivalent to reconstructing u_t. In contrast, visiting diverse states leads to a more novel read-out u_t^e and makes it more challenging to reconstruct q_t, generally leading to a higher intrinsic reward. The autoencoder network W can be viewed as an associative memory of surprises that persist across episodes. At timestep t in any episode during training, W is queried with q_t to produce a reconstructed memory q̃_t. The surprise novelty is then determined as:

r_t^i = ‖q̃_t − q_t‖    (4)

which is the norm of the surprise residual q̃_t − q_t. It will be normalized and added to the external reward as an intrinsic reward bonus. The details of computing and using normalized intrinsic rewards can be found in Appendix C. We implement W as a feed-forward neural network that learns to reconstruct its own inputs. This kind of autoencoder has been shown to be equivalent to an associative memory that supports memory encoding and retrieval through attractor dynamics (Radhakrishnan et al., 2020). The query surprise is encoded into the weights of the network via backpropagation as we minimize the reconstruction loss below:

L_W = E_t[r_t^i] = E_t[‖W(q_t) − q_t‖]    (5)

Here, q̃_t = W(q_t). Intuitively, it is easier to retrieve non-novel surprises experienced many times in past episodes. Thus, the intrinsic reward is lower for states that lead to these familiar surprises.
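A minimal sketch of these memory operations (the attention read-out from M, the loss of Eq. 3, and the reconstruction-based novelty of Eq. 4 and 5) might look as follows in PyTorch. The cosine-style attention and tensor shapes follow the equations above; the single-sample (unbatched) interface, the plain Python list used as the FIFO queue, and all names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SurpriseMemory(nn.Module):
    # Sketch of the episodic memory M (FIFO, cleared per episode) and the autoencoder W.
    def __init__(self, n, d=16, capacity=128, hidden=32):
        super().__init__()
        self.Q = nn.Linear(n, d, bias=False)          # maps surprise space -> memory space
        self.V = nn.Linear(d, n, bias=False)          # maps memory space -> surprise space
        self.W = nn.Sequential(nn.Linear(2 * n, hidden), nn.Tanh(), nn.Linear(hidden, 2 * n))
        self.capacity = capacity
        self.slots = []                               # episodic memory M, wiped at episode end

    def write(self, u):
        self.slots.append(self.Q(u).detach())
        self.slots = self.slots[-self.capacity:]

    def forward(self, u):
        # u: 1-D surprise vector of size n (no batch dimension in this sketch)
        if self.slots:
            M = torch.stack(self.slots)                                   # (N, d)
            w = F.cosine_similarity(self.Q(u).unsqueeze(0), M, dim=-1)    # attention weights w_t(j)
            u_e = self.V(w @ M)                                           # read-out u_t^e = w_t M V
        else:
            u_e = torch.zeros_like(u)
        q = torch.cat([u_e, u], dim=-1)                                   # query q_t = [u_t^e, u_t]
        r_i = (self.W(q) - q.detach()).norm()                             # surprise novelty (Eq. 4)
        loss_m = (u_e - u.detach()).norm()                                # L_M (Eq. 3)
        return r_i, loss_m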
On the contrary, rare surprises are harder to retrieve, which results in high reconstruction errors and intrinsic rewards. W acts like a long-term, inter-episode associative memory. Unlike slot-based memories, it has a fixed memory capacity, can compress information and learn data representations. We could store the surprises in a slot-based memory across episodes, but the size of this memory would become enormous, and the data would be stored redundantly. Hence, the quality of the stored surprise would degrade as more and more observations come in. Readers can refer to Appendix A for the architecture details and for how W can be interpreted as implementing associative memory. The whole system SG+SM is trained end-to-end by minimizing the following loss: L = L_SG + L_M + L_W. Here, we block the gradients of L_W from backpropagating to the parameters of the SG to avoid trivial reconstructions of q_t. Pseudocode of our algorithm is presented in Appendix B.

3 Experimental Results

3.1 Noisy-TV: Robustness against Noisy Observations
We use Noisy-TV, an environment designed to fool exploration methods (Burda et al., 2018b; Savinov et al., 2018), to confirm that our method can generate intrinsic rewards that (1) are more robust to noise and (2) can discriminate rare and common observations through surprise novelty. We simulate this problem by employing a 3D maze environment with a random map structure. The TV is not fixed at specific locations in the maze, to make the task more challenging. Instead, the agent brings the TV with it and can choose to watch TV anytime. Hence, there are three basic actions (turn left, turn right, and move forward) plus one extra action: watch TV. When taking this action, the agent sees a white-noise image sampled from a standard normal distribution; thus, the number of TV channels can be considered infinite. The agent's state is an image of its viewport, and its goal is to search for a red box randomly placed in the maze (+1 reward if the agent reaches the goal). The baseline is RND (Burda et al., 2018b), a simple yet strong SG that is claimed to obviate the stochastic problems of Noisy-TV. Our SG+SM model uses RND as the SG, so we name it RND+SM. Since our model and the baseline share the same RND architecture, the difference in performance must be attributed to our SM. Fig. 3 (a) illustrates the mean-normalized intrinsic rewards (MNIR; see Appendix C for more information on this metric) measured at different states in our Noisy-TV environment. The first two states are noise, the following three states are common walls, and the last two are ones where the agent sees the box. The MNIR bars show that both models are attracted mainly by the noisy TV, resulting in the highest MNIRs. However, our model with the SM suffers less from noisy-TV distractions since its MNIR is lower than RND's. We speculate that the SM is able to partially reconstruct the white-noise surprise via the pass-through mechanism, making the normalized surprise novelty generally smaller than the normalized surprise norm in this case. That mechanism is enhanced in the SM by surprise reconstruction (see Appendix D.1 for an explanation). On the other hand, when observing the red box, RND+SM shows a higher MNIR than RND. The difference between the MNIR for common and rare states is also more prominent in RND+SM than in RND because the RND prediction is not perfect even for common observations, creating relatively significant surprise norms for seeing walls.
The SM fixes that issue by remembering surprise patterns and successfully retrieving them, producing a much smaller surprise novelty compared to that of rare events like seeing the red box. Consequently, the agent with the SM outperforms the other by a massive margin in task rewards (Fig. 3 (b)). As we visualize the number of watch-TV actions and the value of the intrinsic reward produced by RND+SM and RND over training time, we find that RND+SM helps the agent take fewer watching actions and thus collect smaller amounts of intrinsic reward compared to RND. We also verify that our proposed method outperforms a simplified version of the SM that uses counts to measure surprise novelty, as well as a vanilla baseline that does not use intrinsic motivation. The details of these results are given in Appendix D.1.

3.2 MiniGrid: Compatibility with Different Surprise Generators
We show the versatility of our framework SG+SM by applying the SM to four SG backbones: RND (Burda et al., 2018b), ICM (Pathak et al., 2017), NGU (Badia et al., 2019) and an autoencoder (AE) (see Appendix D.2 for implementation details). We test the models on three tasks from the MiniGrid environments: Key-Door (KD), Dynamic-Obstacles (DO) and Lava-Crossing (LC) (Chevalier-Boisvert et al., 2018). If the agent reaches the goal in these tasks, it receives a +1 reward. Otherwise, it can be punished with negative rewards if it collides with obstacles or takes too much time to finish the task. These environments are not stochastic like the Noisy-TV, but they still contain other types of distraction. For example, in KD, the agent can be attracted to irrelevant actions such as repeatedly dropping and picking up the key. In DO, instead of going to the destination, the agent may chase obstacle balls flying around the map. In LC, the agent can commit unsafe actions like going near lava areas, which are different from typical paths. In any case, due to reward sparsity, intrinsic motivation is beneficial. However, surprise alone may not be enough to guide efficient exploration since the observation can be too complicated for the SG to minimize its prediction error. Thus, the agent quickly feels surprised, even in unimportant states. Table 1 shows the average returns of the models for the three tasks. The Baseline is the PPO backbone trained without intrinsic reward. RND, ICM, NGU and AE are SGs providing the PPO with surprise-norm rewards, while our method SG+SM uses surprise-novelty rewards. The results demonstrate that models with the SM often outperform their SG significantly and always contain the best performers. Notably, in the LC task, SGs hinder the performance of the Baseline because the agents are attracted to dangerous vivid states, which are hard to predict but cause the agent's death. The SM models avoid this issue and outperform the Baseline in the case of ICM+SM. Compared to AE, which computes the intrinsic reward based on the novelty of the state, AE+SM shows a much higher average score in all tasks. That manifests the importance of modeling the novelty of surprise instead of states. To analyze the difference between the SG+SM's and SG's MNIR structure, we visualize the MNIR for each cell in the map of Key-Door in Appendix's Figs. 5 (b) and (c). We create a synthetic trajectory that scans through all the cells in the big room on the left and, at each cell, use the RND+SM and RND models to compute the corresponding surprise-novelty and surprise-norm MNIRs, respectively. As shown in Fig. 5 (b), RND+SM selectively identifies truly surprising events, and only a few cells have high surprise-novelty MNIR.
Here, we can visually detect three important events that receive the most MNIR: seeing the key (bottom row), seeing the door side (in the middle of the rightmost column) and approaching the front of the door (the second and fourth rows). Other, less important cells are assigned very low MNIR. On the contrary, RND often gives high surprise-norm MNIR to cells around the important ones, which creates a noisy MNIR map as in Fig. 5 (c). As a result, RND's performance is better than the Baseline, yet far from that of RND+SM. Another analysis of how surprise novelty discriminates among surprises with similar norms is given in Appendix's Fig. 8.

3.3 Atari: Sample-efficient Benchmark
We adopt the sample-efficiency Atari benchmark (Kim et al., 2019) on six hard-exploration games where the training budget is only 50 million frames. We use our SM to augment two SGs: RND (Burda et al., 2018b) and LWM (Ermolov and Sebe, 2020). Unlike RND, LWM uses a recurrent world model and forward dynamics to generate surprises. Details of the SGs, training and evaluation are in Appendix D.3. We run the SG and SG+SM in the same codebase and setting. Table 2 reports our results together with representative results from prior works, showing that SM-augmented models outperform their SG counterparts in all games (same codebase). In Frostbite and Montezuma Revenge, RND+SM's score is almost twice that of RND. For LWM+SM, games such as Gravitar and Venture show more than 40% improvement. Overall, LWM+SM and RND+SM achieve the best mean and median human-normalized scores, improving 16% and 22% w.r.t. the best SGs, respectively. Notably, RND+SM shows a significant improvement on the notorious Montezuma Revenge. We also verify the benefit of the SM in the long run on Montezuma Revenge and Frostbite. As shown in Fig. 4 (a,b), RND+SM still significantly outperforms RND after 200 million training frames, achieving average scores of 10,000 and 9,000, respectively. The result demonstrates the scalability of our proposed method. When using RND and RND+SM to compute the average MNIR in several rooms of Montezuma Revenge (Fig. 1), we find that the SM makes the MNIR higher for surprising events in rooms with complex structures while depressing the MNIR of fake surprises in dark rooms. Here, even in the dark room, the movement of agents (human or spider) is hard to predict, leading to a high average MNIR. On the contrary, the average MNIR of surprise novelty is reduced if the prediction error can be recalled from the memory. Finally, measuring the running time of the models, we notice little computing overhead caused by our SM. On our Nvidia A100 GPUs, LWM's and LWM+SM's average times for one 50M-frame training run are 11h 38m and 12h 10m, respectively. For one 200M-frame training run, RND's and RND+SM's average times are 26h 24m and 28h 1m, respectively. These correspond to only 7% more training time, while the performance gap is significant (4000 scores).

3.4 Ablation Study
Role of Memories. Here, we use MiniGrid's Dynamic-Obstacles task to study the role of M and W in the SM (built upon RND as the SG). Disabling W, we directly use ‖q_t‖ = ‖[u_t^e, u_t]‖ as the intrinsic reward, and name this version SM (no W). To ablate the effect of M, we remove u_t^e from q_t and only use q_t = u_t as the query to W, forming the version SM (no M). We also consider different episodic memory capacities and slot sizes N-d ∈ {32-4, 128-16, 1024-64}. As N and d increase, the short-term context expands and more past surprise information is considered in the attention.
In theory, a big M is helpful to capture a longer-term and more accurate context for constructing the surprise query. Fig. 4 (c) depicts the performance curves of the methods after 10 million training steps. SM (no W) and SM (no M) show weak signs of learning, confirming the necessity of both modules in this task. Increasing N-d from 32-4 to 1024-64 improves the final performance. However, 1024-64 is not significantly better than 128-16, perhaps because it is unlikely to have similar surprises that are more than 128 steps apart. Thus, a larger attention span does not provide a benefit. As a result, we keep using N = 128 and d = 16 in all other experiments for faster computing. We also verify the necessity of M and W in Montezuma Revenge and illustrate how M generates lower MNIR when two similar events occur in the same episode of Key-Door (see Appendix D.4).

No Task Reward. In this experiment, we remove task rewards and merely evaluate the agent's ability to explore using intrinsic rewards. The task is to navigate 3D rooms and get a +1 reward for picking up an object (Chevalier-Boisvert, 2018). The state is the agent's image view, and there is no noise. Without task rewards, it is crucial to maintain the agent's interest in unique events such as seeing the objects. In this partially observable environment, surprise-prediction methods may struggle to explore even without noise due to the lack of information for good predictions, leading to generally high prediction errors. For this testbed, we evaluate a random exploration agent (Baseline), RND and RND+SM in two settings: one room with three objects (easy), and four rooms with one object (hard). To see the difference among the models, we compare the cumulative task rewards over 100 million steps (see Appendix D.4 for details). RND is even worse than the Baseline in the easy setting because prediction errors create a strong bias (high intrinsic rewards) towards the unpredictable, hindering exploration if the map is simple. In contrast, RND+SM uses surprise novelty, generally showing smaller intrinsic rewards (see Appendix Fig. 12 (right)). Consequently, our method consistently demonstrates significant improvements over the other baselines (see Fig. 4 (d) for the hard setting).

4 Related works
Intrinsic motivation approaches usually give the agent reward bonuses for visiting novel states to encourage exploration. The bonus is proportional to the mismatch between prediction and reality, also known as surprise (Schmidhuber, 2010). One kind of predictive model is the dynamics model, wherein the surprise is the error of the model in predicting the next state given the current state and action (Achiam and Sastry, 2017; Stadie et al., 2015). One critical problem of these approaches is the unwanted bias towards transitions where the prediction target is a stochastic function of the inputs, commonly found in partially observable environments. Recent works focus on improving the features of the predictor's input by adopting representation learning mechanisms such as inverse dynamics (Pathak et al., 2017), variational autoencoders, random/pixel features (Burda et al., 2018a), or whitening transforms (Ermolov and Sebe, 2020). Although better representations may improve the reward bonus, they cannot completely solve the problem of stochastic dynamics and thus fail in extreme cases such as the noisy-TV problem (Burda et al., 2018b).
Besides dynamics prediction, several works propose to predict other quantities as functions of the current state by using autoencoders (Nylend, 2017), episodic memory (Savinov et al., 2018), or random networks (Burda et al., 2018b). Burda et al. (2018b) claimed that using a deterministic random target network is beneficial in overcoming stochasticity issues. Other methods combine this idea with episodic memory and other techniques, achieving good results in large-scale experiments (Badia et al., 2020; 2019). From an information theory perspective, the notion of surprise can be linked to information gain or uncertainty, and predictive models can be treated as parameterized distributions (Achiam and Sastry, 2017; Houthooft et al., 2016; Still and Precup, 2012). Furthermore, to keep the agent away from unpredictable observations, the reward bonus can be measured by the progress of the model's prediction (Achiam and Sastry, 2017; Lopes et al., 2012; Schmidhuber, 1991). However, these methods are complicated and hard to scale, requiring heavy computing. A different angle for handling stochastic observations during exploration is surprise minimization (Berseth et al., 2020; Rhinehart et al., 2021). In this direction, the agents get bigger rewards for seeing more familiar states. Such a strategy is somewhat opposite to our approach and suitable for unstable environments where the randomness occurs independently of the agents' actions. These earlier works rely on the principle of using surprise as an incentive for exploration and differ from our principle, which utilizes surprise novelty. Also, our work augments these existing works with a surprise memory module and can be used as a generic plug-in improvement for surprise-based models. We note that our memory formulation differs from memory-based novelty concepts using episodic memory (Badia et al., 2019), momentum memory (Fang et al., 2022), or counting (Bellemare et al., 2016; Tang et al., 2017) because our memory operates on the surprise level, not the state level. In our work, exploration is discouraged not only in frequently visited states but also in states whose surprises can be reconstructed using the SM. Our work provides a more general and learnable novelty detection mechanism, which is more flexible than nearest neighbour search or a counting lookup table.

5 Discussion
This paper presents the Surprise Generator-Surprise Memory (SG+SM) framework to compute surprise novelty as an intrinsic motivation for the reinforcement learning agent. Exploring with surprise novelty is beneficial when there are repeated patterns of surprises or random observations. For example, in the Noisy-TV problem, our SG+SM can harness the agent's tendency to visit noisy states such as watching random TV channels while encouraging it to explore rare events with distinctive surprises. We empirically show that our SM can supplement three surprise-based SGs to achieve more rewards in fewer training steps in three grid-world environments. In 3D navigation without external reward, our method significantly outperforms the baselines. On two strong SGs, our SM also achieves superior results in hard-exploration Atari games within 50 million training frames. Even in the long run, our method maintains a clear performance gap from the baselines, as shown in Montezuma Revenge and Frostbite.
If we view surprise as the first-order error between the observation and the prediction, then surprise novelty, the retrieval error between the surprise and the reconstructed memory, is essentially a second-order error. It would be interesting to investigate the notion of higher-order errors, study their theoretical properties, and utilize them for intrinsic motivation in future work.

A W as Associative Memory
This section connects the associative memory concept to neural networks trained with the reconstruction loss as in Eq. 5. We will show how the neural network (W) stores and retrieves its data. We use a 1-layer feed-forward neural network W to simplify the analysis, but the idea extends to multi-layer feed-forward neural networks. For simplicity, assuming W is a square matrix, the objective is to minimize the difference between the input and the output of W:

L = ‖Wx − x‖_2^2    (6)

Using gradient descent, we update W as follows:

W ← W − α ∂L/∂W
  = W − 2α(Wx − x)x^T
  = W − 2αWxx^T + 2αxx^T
  = W(I − 2αxx^T) + 2αxx^T

where I is the identity matrix and x is a column vector. If a batch of inputs {x_i}_{i=1}^{B} is used in computing the loss in Eq. 6, at step t we update W as

W_t = W_{t−1}(I − αX_t) + αX_t, where X_t = 2 Σ_{i=1}^{B} x_i x_i^T.

From t = 0, after T updates, the weight becomes

W_T = W_0 Π_{t=1}^{T}(I − αX_t) − α^2 Σ_{t=2}^{T} X_t X_{t−1} Π_{k=t+1}^{T}(I − αX_k) + α Σ_{t=1}^{T} X_t    (7)

Given its form, X_t is symmetric positive-definite. Also, as α is often very small (0 < α ≪ 1), we can show that ‖I − αX_t‖ < 1 − λ_min(αX_t) < 1. This means that as T → ∞, ‖W_0 Π_{t=1}^{T}(I − αX_t)‖ → 0, and thus W_T → −α^2 Σ_{t=2}^{T} X_t X_{t−1} Π_{k=t+1}^{T}(I − αX_k) + α Σ_{t=1}^{T} X_t, independent of the initialization W_0. Eq. 7 shows how the data (X_t) is integrated into the neural network weight W_t. The other components, such as α^2 Σ_{t=2}^{T} X_t X_{t−1} Π_{k=t+1}^{T}(I − αX_k), can be viewed as additional encoding noise. Without these components (by assuming α is small enough),

W_T ≈ α Σ_{t=1}^{T} X_t = 2α Σ_{t=1}^{T} Σ_{i=1}^{B} x_{i,t} x_{i,t}^T

or equivalently, we have the Hebbian update rule W ← W + x_{i,t} ⊗ x_{i,t}, where W can be seen as the memory, ⊗ is the outer product and x_{i,t} is the data or item stored in the memory. This memory update is the same as that of classical associative memory models such as the Hopfield network and the Correlation Matrix Memory (CMM). Given a query q, we retrieve the value in W as the output of the neural network:

q' = q^T W = q^T R + α Σ_{t=1}^{T} q^T X_t = q^T R + 2α Σ_{t=1}^{T} Σ_{i=1}^{B} q^T x_{i,t} x_{i,t}^T

where R = W_0 Π_{t=1}^{T}(I − αX_t) − α^2 Σ_{t=2}^{T} X_t X_{t−1} Π_{k=t+1}^{T}(I − αX_k). If q was presented to the memory W in the past as some x_j, q' can be written as:

q' = q^T R + 2α Σ_{t=1}^{T} Σ_{i=1, i≠j}^{B} q^T x_{i,t} x_{i,t}^T + 2α q^T(q q^T)
   = q^T R (noise) + 2α Σ_{t=1}^{T} Σ_{i=1, i≠j}^{B} q^T x_{i,t} x_{i,t}^T (cross-talk) + 2α ‖q‖^2 q^T

Assuming that the noise is insignificant thanks to a small α, we can retrieve exactly q given that all items in the memory are orthogonal (by a suitable transformation, this condition can be reduced to linear independence). As a result, after scaling q' with 1/(2α), the retrieval error ‖q'/(2α) − q‖ is 0. If q is new to W, the error depends on whether the items stored in W are close to q. Usually, the higher the error, the more novel q is w.r.t. W.

B SM's Implementation Details
In practice, the short-term memory M is a tensor of shape [B, N, d], where B is the number of actors, N the memory length and d the slot size. B is an SG hyperparameter and is tuned per task based on SG performance.
For example, for the Noisy-TV, we tune RND as the SG, obtaining B = 64, and directly use this value for M. N and d are the hyperparameters specific to our method. As mentioned in Sec. 3.4, we fix N = 128 and d = 16 in all experiments. As B increases in large-scale experiments, memory storage for M can become demanding. To overcome this issue, we can use the uniform writing trick to optimally preserve information while reducing N (Le et al., 2019). Also, for W, by using a small hidden size we can reduce the requirement for physical memory significantly. Practically, in all experiments, we implement W as a 2-layer feed-forward neural network with a hidden size of 32 (2n → 32 → 2n). The activation is tanh. With n = 512 and d = 16, the number of parameters of W is only about 65K. Also, Q ∈ R^{n×d} and V ∈ R^{d×n} have about 8K parameters. In total, our SM introduces less than 90K trainable parameters, which is marginal compared to the SG and policy/value networks (up to 10 million parameters). The joint training of SG+SM is presented in Algo. 2. We note that vectors in the algorithms are row vectors, and for simplicity the algorithms assume a single actor. In practice, our algorithm works with multiple actors and mini-batch training.

Algorithm 1: Intrinsic reward computation via the SG+SM framework.
Require: u_t, and our surprise memory SM consisting of a slot-based memory M, parameters Q, V, and a neural network W
1: Compute L_SG = ‖u_t‖
2: Query M with u_t, retrieve u_t^e = w_t M V, where w_t is the attention weight
3: Compute L_M = ‖u_t^e − u_t.detach()‖
4: Query W with q_t = [u_t^e, u_t], retrieve q̃_t = W(q_t)
5: Compute intrinsic reward r_t^i = L_W = ‖q̃_t − q_t.detach()‖
6: return L_SG, L_M, L_W

Algorithm 2: Jointly training SG+SM and the policy.
Require: buffer, policy π_θ, surprise-based predictor SG, and our surprise memory SM consisting of a slot-based memory M, parameters Q, V, and a neural network W
1: Initialize π_θ, SG, Q, W
2: for iteration = 1, 2, ... do
3:   for t = 1, 2, ..., T do
4:     Execute policy π_θ to collect s_t, a_t, r_t, forming the input I_t = s_t, ... and target O_t
5:     Compute surprise u_t = SG(I_t) − O_t.detach() (Eq. 1)
6:     Compute intrinsic reward r_t^i using Algo. 1
7:     Compute final reward r_t ← r_t + β r_t^i / r_t^std
8:     Add (I_t, O_t, s_{t−1}, s_t, a_t, r_t) to the buffer
9:     Add u_t Q to M
10:    if the episode is done then clear M
11:  end for
12:  for k = 1, 2, ..., K do
13:    Sample I_t, O_t from the buffer
14:    Compute surprise u_t = SG(I_t) − O_t.detach() (Eq. 1)
15:    Compute L_SG, L_M, L_W using Algo. 1
16:    Update SG, Q and W by minimizing the loss L = L_SG + L_M + L_W
17:    Update π_θ with samples (s_{t−1}, s_t, a_t, r_t) from the buffer using the backbone algorithm
18:  end for
19: end for

C Intrinsic Reward Normalization
Following Burda et al. (2018b), to keep the intrinsic reward on a consistent scale, we normalize it by dividing by a running estimate of the standard deviation of the intrinsic returns. This normalized intrinsic reward (NIR) is used for training. In addition, there is a hyperparameter, the intrinsic reward coefficient, that scales the intrinsic contribution relative to the external reward. We denote the running standard deviation and the intrinsic reward coefficient as r_t^std and β, respectively, in Algo. 2. In our experiments, unless otherwise stated, β = 1.
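A minimal sketch of this normalization, i.e., dividing the raw surprise novelty by a running estimate of the standard deviation of the (discounted) intrinsic returns before scaling with β, might look as follows. The Welford-style running statistics, the discount factor, and the function names are illustrative assumptions in the spirit of Burda et al. (2018b), not our exact implementation.

import numpy as np

class RunningStd:
    # Sketch: running standard deviation via Welford's online update.
    def __init__(self, eps=1e-8):
        self.mean, self.m2, self.count, self.eps = 0.0, 0.0, 0, eps

    def update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    @property
    def std(self):
        return float(np.sqrt(self.m2 / max(self.count, 1))) + self.eps

def normalized_intrinsic_reward(r_i, intrinsic_return, stats, beta=1.0, gamma=0.99):
    # Divide the raw surprise novelty r_i by a running std of discounted intrinsic returns.
    intrinsic_return = gamma * intrinsic_return + r_i
    stats.update(intrinsic_return)
    return beta * r_i / stats.std, intrinsic_return

In this sketch, a single RunningStd instance would be kept for the whole training run, while intrinsic_return is carried from step to step within each actor.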
We note that when comparing the intrinsic reward at different states in the same episode (as in the experiment section), we normalize intrinsic rewards by subtracting the mean and then dividing by the standard deviation of all intrinsic rewards in the episode. Hence, the mean-normalized intrinsic reward (MNIR) in these experiments is different from the one used in training and can be negative. We argue that normalizing with the mean and standard deviation of the episode's intrinsic rewards is necessary to make the comparison reasonable. For example, in an episode, method A assigns every step an intrinsic reward of 200, while method B assigns novel steps an intrinsic reward of 1 and all other steps 0. Clearly, method A treats all steps in the episode equally, and thus it is equivalent to giving no motivation for any of the steps in the episode (the learned policy will not motivate the agent to visit novel states). On the contrary, method B triggers motivation for the novel steps in the episode (the learned policy will encourage visits to novel states). Without normalizing by mean subtraction, it is tempting to conclude that the relative intrinsic reward of method A for a novel step is higher, which is technically incorrect.

D Experimental Details
D.1 Noisy-TV
We create the Noisy-TV environment by modifying the Maze environment (MazeS3Fast-v0) in the MiniWorld library (Apache License) (Chevalier-Boisvert, 2018). The backbone RL algorithm is PPO. We adopt a public code repository for the implementation of PPO and RND (MIT License; https://github.com/jcwleo/random-network-distillation-pytorch). In this environment, the state is an image of the agent's viewport. The details of the architecture and hyperparameters of the backbone and RND are presented in Table 4. Most of the settings are the same as in the repository. We only tune the number of actors (32, 128, 1024), mini-batch size (4, 16, 64) and ϵ-clip (0.1, 0.2, 0.3) to suit our hardware and the task. After tuning with RND, we use the same setting for our RND+SM. Fig. 6 reports all results for this environment. Fig. 6 (a) compares the final intrinsic reward (IR) generated by RND and RND+SM over training time. Overall, RND's IR is always higher than RND+SM's, indicating that our method significantly reduces the agent's attention to the noisy TV by assigning less IR to watching TV. Fig. 6 (b) compares the number of noisy actions between the two methods, where RND+SM consistently shows fewer watch-TV actions. That confirms that the RND+SM agent is less distracted by the TV. As mentioned in the main text, RND+SM is better at handling noise than RND. Note that RND aims to predict the transformed states by minimizing ‖SG(s_t) − f_R(s_t)‖, where f_R is a fixed, randomly initialized neural network. If RND can learn the transformation, it can pass the state through, which is similar to reconstruction in an autoencoder. However, learning f_R can be harder and require more samples than learning an identity transformation since f_R is non-linear and complicated. Hence, it may be more challenging for RND to pass the noise through than for the SM. Another possible reason lies in the operating space (state vs. surprise). If we treat white noise as a random variable X, a surprise generator (SG) can at most learn to predict the mean of this variable and compute the surprise U = E[X|Y] − X, where Y is a random factor that affects the training of the surprise generator. The factor Y makes the SG produce the imperfect reconstruction E[X|Y] (in this case, the perfect reconstruction would be E[X]). Here, the SG and SM learn to reconstruct X and U, respectively.
We can prove that the variance of each feature dimension in U is smaller than that of X (see Sec. E). Learning an autoencoder in surprise space is more beneficial than in state space since the data has less variance and thus may require fewer data points to learn the data distribution. Fig. 6 (c) reports the performance of all baselines. Besides RND and RND+SM, we also include PPO without intrinsic reward as the vanilla Baseline for reference. In addition, we investigate a simple implementation of the SM using a count-based method to measure surprise novelty. Concretely, we use the SimHash algorithm to count the number of occurrences of a surprise, c(u_t), in a similar manner to Bellemare et al. (2016) and name this baseline RND+SM (count). The intrinsic reward is then β/√c(u_t). We tune the hyperparameter β ∈ {0.5, 1, 5} and the hash matrix size k_h ∈ {32, 64, 128, 256} and use the same normalization and training process to run this baseline. We report the learning curves of the best variant with β = 0.5 and k_h = 128. The results demonstrate that the proposed SM using memory-augmented neural networks outperforms the count-based SM by a significant margin. One possible reason is that the count-based method cannot handle white noise: it always returns high intrinsic rewards. In contrast, our SM can partially reconstruct white noise via the pass-through mechanism and thus reduces the impact of fake surprise on learning. Also, the proposed SM is more flexible than its count-based counterpart since it learns to reconstruct from the data rather than using a fixed counting scheme. The results also show that RND+SM outperforms the vanilla Baseline. Although the improvement is moderate (0.9 vs 0.85), the result is remarkable since the Noisy-TV is designed to fool intrinsic motivation methods and, among all methods, only RND+SM outperforms the vanilla Baseline.

D.2 MiniGrid
The tasks in this experiment are from the MiniGrid library (Apache License) (Chevalier-Boisvert et al., 2018). In MiniGrid environments, the state is a description vector representing partial observation information such as the location of the agents, objects, moving directions, etc. The three tasks use the hardest maps:
• DoorKey: MiniGrid-DoorKey-16x16-v0
• LavaCrossing: MiniGrid-LavaCrossingS11N5-v0
• DynamicObstacles: MiniGrid-Dynamic-Obstacles-16x16-v0
The SGs used in this experiment are RND (Burda et al., 2018b), ICM (Pathak et al., 2017), NGU (Badia et al., 2019) and AE. Below we describe the input-output structure of these SGs.
• RND: I_t = s_t and O_t = f_R(s_t), where s_t is the current state and f_R is a neural network that has a similar structure to the prediction network, yet its parameters are initialized randomly and fixed during training.
• ICM: I_t = (s_{t−1}, a_t) and O_t = s_t, where s denotes the embedding of the state and a the action. We note that in addition to the surprise loss (Eq. 2), ICM is trained with an inverse dynamics loss.
• NGU: This agent reuses RND as the SG (I_t = s_t and O_t = f_R(s_t)) and combines the surprise norm with a KNN episodic reward. When applying our SM to NGU, we only take the surprise-based reward as input to the SM. The code for NGU is based on the public repository https://github.com/opendilab/DI-engine.
• AE: I_t = s_t and O_t = s_t, where s denotes the embedding of the state. This SG can be viewed as an associative memory of the observation, aiming to remember the states. This baseline is designed to verify the importance of surprise modeling.
Despite sharing a similar architecture, it differs from our SM, which operates on surprise and has an augmented episodic memory to support reconstruction. The backbone RL algorithm is PPO. The code for PPO and RND is the same as in Sec. D.1. We adopt a public code repository for the implementation of ICM (MIT License; https://github.com/jcwleo/curiosity-driven-exploration-pytorch). We implement AE ourselves using a 3-layer feed-forward neural network. For the SGs, we only tune the number of actors (32, 128, 1024), mini-batch size (4, 16, 64) and ϵ-clip (0.1, 0.2, 0.3) on the DoorKey task. We also tune the architecture of the AE (number of layers: 1, 2 or 3; activation: tanh or ReLU) on the DoorKey task. After tuning the SGs, we use the same setting for our SG+SM. The detailed configurations of the SGs for this experiment are reported in Table 3 and Table 4. The full learning curves of the backbone (Baseline), SG and SG+SM are given in Fig. 7. To visualize the difference between surprise and surprise residual vectors, we map those of the synthetic trajectory to a 2-dimensional space using a t-SNE projection in Fig. 8 (panels: Surprise; Surprise Residual). The surprise points show clustered patterns for high-MNIR states, which confirms our hypothesis that there exist familiar surprises (they are highly surprising due to their high norm, yet repeated). In contrast, the surprise residuals estimated by the SM have no high-MNIR clusters. The SM transforms clustered surprises into scattered surprise residuals, resulting in a broader range of MNIR and thus showing significant discrimination among states that have similar surprise norms.

D.3 Atari
The Atari 2600 games task involves training an agent to achieve high game scores. The state is a 2D image representing the screen of the game.

SG and RL backbone implementations. We use two SGs: RND and LWM. RND uses a PPO backbone as in the previous sections. On the other hand, LWM uses a DQN backbone with a CNN-based encoder and a GRU-based value function. The LWM SG uses a GRU to model the forward dynamics of the environment and thus its input is I_t = (s_{t−1}, a_t, h_{t−1}), where s_{t−1} is the embedding of the previous state, a_t the current action, and h_{t−1} the hidden state of the world-model GRU. The target O_t is the embedding of the current state s_t. RND follows the same implementation as in the previous experiments. We use the public code of LWM provided by the authors (https://github.com/htdt/lwm) to implement LWM. The hyperparameters of RND and LWM are tuned by the repository's owner (see Table 4 for RND and refer to the code or the original paper (Ermolov and Sebe, 2020) for the details of the LWM implementation). We augment them with our SM with default hyperparameters N = 128, d = 16.

Training and evaluation. We follow the standard training for Atari games, such as stacking four frames and enabling sticky actions. All the environments are based on OpenAI's gym-atari NoFrameskip-v4 variants (MIT License; https://github.com/openai/gym). After training, we evaluate the models by measuring the average return over 128 episodes and report the results in Table 2. Depending on the setting, the models are trained for 50 or 200 million frames.

Results. Fig. 9 shows the learning curves of all models in the six Atari games under the low-sample regime. LWM+SM clearly outperforms LWM in Frostbite, Venture, Gravitar and Solaris, and RND+SM clearly outperforms RND in Frostbite, Venture, Gravitar and Montezuma Revenge. Table 5 reports the results of more baselines.
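As a reference for the LWM-style input-output structure above (I_t = (s_{t−1}, a_t, h_{t−1}) and target O_t = embedding of s_t), a minimal sketch of a recurrent forward-dynamics surprise generator is given below in PyTorch. The layer sizes, the one-hot action encoding, and all names are illustrative assumptions rather than the exact LWM implementation of Ermolov and Sebe (2020).

import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentDynamicsSG(nn.Module):
    # Sketch: a GRU world model predicting the next state embedding; the surprise is its error.
    def __init__(self, emb_dim, num_actions, hidden=256):
        super().__init__()
        self.gru = nn.GRUCell(emb_dim + num_actions, hidden)
        self.head = nn.Linear(hidden, emb_dim)
        self.num_actions = num_actions

    def forward(self, prev_emb, action, h_prev, next_emb):
        # prev_emb: (batch, emb_dim); action: (batch,) long; h_prev: (batch, hidden); next_emb: (batch, emb_dim)
        a = F.one_hot(action, self.num_actions).float()
        h = self.gru(torch.cat([prev_emb, a], dim=-1), h_prev)
        u_t = self.head(h) - next_emb.detach()        # surprise vector (Eq. 1)
        return u_t, u_t.norm(dim=-1), h               # surprise, its norm, new hidden state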
D.4 Ablation Study
Role of Memories. We conduct more ablation studies to verify the need for the short-term (M) and long-term (W) memory in our SM. We design the additional baselines SM (no W) and SM (no M) (see Sec. 3.4), and compare them with the full SM in the Montezuma Revenge and Frostbite tasks. Fig. 10 (a) shows that only SM (full) reaches an average score of more than 5000 after 50 million training frames. The other ablated baselines only achieve scores of around 2000. We also show the impact of the episodic memory in decreasing the intrinsic reward for similar states, as discussed in Sec. 2.3. We select three states in MiniGrid's KeyDoor task and compute the MNIR for each state, visualized in Fig. 11. At the step-1 state, the MNIR is low since there is nothing special in the view of the agent. At the step-15 state, the agent first sees the key and gets a high MNIR. At the step-28 state, the agent drops the key and sees the key again. This event is still more interesting than the step-1 state. However, the view is similar to the one at step 15, and thus the MNIR decreases from 0.7 to 0.35, as expected.

No Task Reward. The tasks in this experiment are from the MiniWorld library (Apache License) (Chevalier-Boisvert, 2018). The two tasks are:
• Easy: MiniWorld-PickupObjs-v0
• Hard: MiniWorld-FourRooms-v0
The backbone and SG are the same as in Sec. D.1. We remove the task/external reward in this experiment. Without task reward, the Baseline receives no training signal and thus behaves similarly to a random agent. Fig. 12 illustrates the running average of the cumulative task return and the intrinsic reward over training steps. In the Easy mode, the random Baseline can even perform better than RND, which indicates that a biased intrinsic reward is not always helpful. RND+SM, in both modes, shows superior performance, confirming that its intrinsic reward is better at guiding exploration than that of RND.

E Theoretical Property of the Surprise Space's Variance
Let X be a random variable representing the observation at some timestep. A surprise generator (SG) can at most learn to predict the mean of this variable and computes the surprise U = E[X|Y] − X, where Y is a random factor that affects the prediction of the SG and makes it produce the imperfect reconstruction E[X|Y] instead of E[X]. For instance, in the case of an autoencoder AE as the SG, X and U stand for s_t and AE(s_t) − s_t, respectively. Let us denote Z = E[X|Y]; then E[Z|Y] = Z and E[Z^2|Y] = Z^2. We have

var(X) = var(X − Z + Z) = var(X − Z) + var(Z) + 2cov(X − Z, Z)
       = var(X − Z) + var(Z) + 2E[(X − Z)Z] − 2E[X − Z]E[Z]

Using the law of iterated expectations, we have

E[X − Z] = E[E[X − Z|Y]] = E[E[X|Y] − E[Z|Y]] = E[Z − Z] = 0

and

E[(X − Z)Z] = E[E[(X − Z)Z|Y]] = E[E[XZ − Z^2|Y]] = E[E[XZ|Y] − E[Z^2|Y]] = E[Z E[X|Y] − Z^2] = E[Z^2 − Z^2] = 0

Therefore, var(X) = var(X − Z) + var(Z). Let C^X_{ii}, C^{X−Z}_{ii} and C^Z_{ii} denote the diagonal entries of these covariance matrices; they are the variances of the components of the random vectors X, X − Z and Z, respectively. That is,

(σ^X_i)^2 = (σ^{X−Z}_i)^2 + (σ^Z_i)^2  ⇒  (σ^X_i)^2 ≥ (σ^{X−Z}_i)^2 = (σ^U_i)^2

In our setting, X and U represent the observation and surprise spaces, respectively. Therefore, the variance of each feature dimension in surprise space is no larger than that in observation space. Equality is obtained when (σ^Z_i)^2 = 0, i.e., E[X|Y] = E[X]; that is, when the SG's prediction is perfect, which is unlikely to happen in practice.
F Limitations
Our method assumes that surprises have patterns and can be remembered by our surprise memory. There might exist environments beyond those studied in this paper where this assumption does not hold, or where surprise-based counterparts already achieve optimal exploration (e.g., with a perfect SG) and thus do not need the SM for improvement (e.g., the Freeway game). In addition, M and W require more physical memory (RAM/GPU) than SG-only methods. Finally, a plug-in module like the SM introduces additional hyperparameters, such as N and d. Although we find that the default values N = 128 and d = 16 work well across all experiments in this paper, we recommend adjusting them if users apply our method to novel domains.
1. What is the focus and contribution of the paper regarding reinforcement learning?
2. What are the strengths and weaknesses of the proposed intrinsic reward function?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the limitations of the experimental analysis, and what further interrogation would be useful?
5. How could the paper improve its explanation and illustration of the proposed "surprise novelty" computation?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors propose a new intrinsic reward function for reinforcement learning, building on earlier definitions of "surprise". The proposed idea is to reward not the surprise itself but its novelty. The authors propose a specific approach to computing this intrinsic reward function and present an empirical analysis in a number of domains.

Strengths And Weaknesses
Strengths: The paper is in an important area of research relevant to the conference. The proposed concept for intrinsic reward is intriguing and can be useful. There is extensive empirical analysis showing improved performance compared to existing approaches.
Weaknesses: The experimental analysis has some limitations (see below), with the result that the general applicability and benefits of the approach are difficult to evaluate. The writing is relatively poor.

Clarity, Quality, Novelty And Reproducibility
Novelty: Intrinsic motivation in reinforcement learning is an active area of research. "Surprise" is a concept that has been used frequently in this literature, and so is "novelty". The current paper builds on this literature to propose a new source of intrinsic motivation, namely the novelty of the surprise, as well as a mechanism for computing a precise value of the intrinsic reward.
Clarity: The writing can be much improved. The main paper is repetitive and takes a long time to get to the point and to precise definitions. A large number of acronyms are introduced, which creates unnecessary difficulty for the reader. Page 1 is not the ideal placement for Figure 1, especially in its current form (see further comments on this below). Some important pieces of information are not provided in the main paper (e.g., descriptions of the domains, which are critical for understanding the results). Some important results are relegated to the appendices, which necessitated a lot of back-and-forth between the main paper and the appendices when reading the paper. Performance metrics should be clearly and explicitly defined in the main paper (e.g., in Figure 5, what precisely is being plotted?). The six frames and associated intrinsic rewards in Figure 1 are difficult to understand and evaluate without context. A much more useful figure would show the complete history of rewards through one or more episodes, along with associated frames at certain decision stages (see, for example, Figure 1 by Burda et al., 2018). Similarly, in Figure 3, intrinsic and extrinsic rewards could be shown as the agent interacts with the environment. Currently, the figure shows 7 sample frames, which I did not find to be particularly informative without further context. Learning curves corresponding to Table 1 would be useful to see in the main paper.
Quality: The empirical evaluation is extensive, but it would be useful to see further interrogation in some areas. One question that arises is the generality of the experimental results to other domains. Much of the discussion and results in the paper are on visual domains. In addition, the domains appear to be either deterministic or stochastic in a particular way (e.g., the stochasticity in the noisy-TV domain is entirely irrelevant to the task). Another area that deserves further interrogation is the structure of the Surprise Memory. It would be useful to see an exploration of the design choices made here (ideally, with some alternative approaches). The ablation study in section 4 is on a single domain and has a narrow scope.
Finally, while I appreciate the existing efforts of the authors to illustrate the behavior of the algorithm (e.g., Figure 4), the main emphasis in the paper is on high-level performance comparison (e.g., total reward obtained), with the result that the proposed computation of "surprise novelty" is not deeply understood.
ICLR
Title Intrinsic Motivation via Surprise Memory Abstract We present a new computing model for intrinsic rewards in reinforcement learning that addresses the limitations of existing surprise-driven explorations. The reward is the novelty of the surprise rather than the surprise norm. We estimate the surprise novelty as retrieval errors of a memory network wherein the memory stores and reconstructs surprises. Our surprise memory (SM) augments the capability of surprise-based intrinsic motivators, maintaining the agent's interest in exciting exploration while reducing unwanted attraction to unpredictable or noisy observations. Our experiments demonstrate that the SM combined with various surprise predictors exhibits efficient exploring behaviors and significantly boosts the final performance in sparse reward environments, including Noisy-TV, navigation and challenging Atari games. 1 Introduction What motivates agents to explore? Successfully answering this question would enable agents to learn efficiently in formidable tasks. Random explorations such as ε-greedy are inefficient in high-dimensional cases, failing to learn despite training for hundreds of millions of steps in sparse reward games (Bellemare et al., 2016). Alternative approaches propose to use intrinsic motivation to aid exploration by adding bonuses to the environment's rewards (Bellemare et al., 2016; Stadie et al., 2015). The intrinsic reward is often proportional to the novelty of the visited state: it is high if the state is novel (e.g. different from past ones (Badia et al., 2020; 2019)) or less frequently visited (Bellemare et al., 2016; Tang et al., 2017). Another view of intrinsic motivation is from surprise, which refers to the result of an experience being unexpected, and is determined by the discrepancy between the expectation (from the agent's prediction) and observed reality (Barto et al., 2013; Schmidhuber, 2010). Technically, surprise is the difference between prediction and observation representation vectors. The norm of the residual (i.e. prediction error) is used as the intrinsic reward. Here, we will use the terms surprise and surprise norm to refer to the residual vector and its norm, respectively. Recent works have estimated surprise with various predictive models such as dynamics (Stadie et al., 2015), episodic reachability (Savinov et al., 2018) and inverse dynamics (Pathak et al., 2017), and achieved significant improvements with surprise norm (Burda et al., 2018a). However, surprise-based agents tend to be overly curious about noisy or unpredictable observations (Itti and Baldi, 2005; Schmidhuber, 1991). For example, consider an agent watching a television screen showing white noise (the noisy-TV problem). The TV is boring, yet the agent cannot predict the screen's content and will be attracted to the TV due to its high surprise norm. This distraction or "fake surprise" is common in partially observable Markov decision processes (POMDPs), including navigation tasks and Atari games (Burda et al., 2018b). Many works have addressed this issue by relying on learning progress (Achiam and Sastry, 2017; Schmidhuber, 1991) or random network distillation (RND) (Burda et al., 2018b). However, the former is computationally expensive, and the latter requires many samples to perform well. This paper overcomes the "fake surprise" issue by using surprise novelty - a new concept that measures the uniqueness of surprise. To identify surprise novelty, the agent needs to compare the current surprise with surprises in past encounters.
One way to do this is to equip the agent with some kind of associative memory, which we implement as an autoencoder whose task is to reconstruct a query surprise. The lower the reconstruction error, the lower the surprise novelty. A further mechanism is needed to deal with the rapid changes in surprise structure within an episode. As an example, if the agent meets the same surprise at two time steps, its surprise novelty should decline, and with a simple autoencoder this will not happen. To remedy this, we add an episodic memory, which stores intra-episode surprises. Given the current surprise, this memory can retrieve similar surprises presented earlier in the episode through an attention mechanism. These surprises act as a context added to the query to help the autoencoder better recognize whether the query surprise has been encountered in the episode or not. The error between the query and the autoencoder's output is defined as surprise novelty, to which the intrinsic reward is set proportionally. We argue that using surprise novelty as an intrinsic reward is better than using the surprise norm. As in POMDPs, surprise norms can be very large since the agent cannot predict its environment perfectly, yet there may exist patterns of prediction failure. If the agent can remember these patterns, it will not feel surprised when similar prediction errors appear, regardless of the surprise norms. An important emergent property of this architecture is that when random observations are presented (e.g., white noise in the noisy-TV problem), the autoencoder can act as an identity transformation operator, thus effectively passing the noise through to reconstruct it with low error. We conjecture that the autoencoder is able to do this with the surprise rather than the observation because the surprise space has lower variance, and we show this in our paper. To make our memory system work on the surprise level, we adopt an intrinsic motivation method to generate surprise for the memory. The surprise generator (SG) can be of any kind based on predictive models and is jointly trained with the memory to optimize its own loss function. To train the surprise memory (SM), we optimize the memory's parameters to minimize the reconstruction error. Our contribution is to propose a new concept of surprise novelty for intrinsic motivation. We argue that it reflects the originality of the environment better than the surprise norm does (see the motivating graphics in Fig. 1). In our experiments, the SM helps RND (Burda et al., 2018b) perform well in our challenging noisy-TV problem while RND alone performs poorly. Beyond RND, we consistently demonstrate significant performance gains when coupling three different SGs with our SM in sparse-reward tasks. Finally, in hard-exploration Atari games, we boost the scores of 2 strong SGs, resulting in better performance under the low-sample regime. 2 Methods 2.1 Surprise Novelty Surprise is the difference between expectation and observation (Ekman and Davidson, 1994). If a surprise repeats, it is no longer a surprise. Based on this intuition, we hypothesize that surprises can be characterized by their novelties, and an agent's curiosity is driven by the surprise novelty rather than the surprise magnitude. Moreover, surprise novelty should be robust against noise: it is small even for random observations. For example, watching a random-channel TV can always be full of surprises as we cannot expect which channel will appear next.
However, the agent should soon find it boring since the surprise of random noise reoccurs repeatedly, and the channels are entirely unpredictable. We propose using a memory-augmented neural network (MANN) to measure surprise novelty. The memory remembers past surprise patterns, and if a surprise can be retrieved from the memory, it is not novel, and the intrinsic motivation should be small. The memory can also be viewed as a reconstruction network. The network can pass its inputs through for random, pattern-free surprises, making them retrievable. Surprise novelty has an interesting property: if some event is unsurprising (the expectation-reality residual is the zero vector), its surprise (the zero vector, with norm 0) is always perfectly retrievable (surprise novelty is 0). In other words, a low surprise norm means low surprise novelty. On the contrary, a high surprise norm can have little surprise novelty as long as the surprise can be retrieved from the memory, either through associative recall or the pass-through mechanism. Another property is that the variance of the surprise is generally lower than that of the observation (state), potentially making learning on the surprise space easier. This property is formally stated as follows. Proposition 1. Let X and U be random variables representing the observation and surprise at the same timestep, respectively. Under an imperfect SG, the following inequality holds: ∀i: (σ_i^X)^2 ≥ (σ_i^U)^2, where (σ_i^X)^2 and (σ_i^U)^2 denote the i-th diagonal elements of var(X) and var(U), respectively. Proof. See Appendix E. 2.2 Surprise Generator Since our MANN requires surprises for its operation, it is built upon a prediction model, which will be referred to as the Surprise Generator (SG). In this paper, we adopt many well-known SGs (e.g. RND (Burda et al., 2018b) and ICM (Pathak et al., 2017)) to predict the observation and compute the surprise u_t and its norm for every step in the environment. The surprise norm is the Euclidean distance between the expectation and the actual observation: ‖u_t‖ = ‖SG(I_t) − O_t‖ (1), where u_t ∈ R^n is the surprise vector of size n, I_t the input of the SG at step t of the episode, and SG(I_t) and O_t the SG's prediction and the observation target, respectively. The input I_t is specific to the SG architecture choice, which can be the current state (s_t) or the previous state and action (s_{t−1}, a_t). The observation target O_t is usually a transformation (identity or random) of the current state s_t, which serves as the target for the SG's prediction. The SG is usually trained to minimize: L_SG = E_t[‖u_t‖] (2). Here, predictable observations have minor prediction errors, or little surprise. One issue is that a large surprise norm can be simply due to noisy or distractive observations. Next, we propose a remedy for this problem. 2.3 Surprise Memory The surprise generated by the SG is stored and processed by a memory network dubbed Surprise Memory (SM). It consists of an episodic memory M and an autoencoder network W, jointly optimized to reconstruct any surprise. At each timestep, the SM receives a surprise u_t from the SG module and reads content u^e_t from the memory M. {u^e_t, u_t} forms a surprise query q_t to W to retrieve the reconstruction q̃_t. This reconstruction is used to estimate the novelty of surprises, forming intrinsic rewards r^i_t. Fig. 2 summarizes the operations of the components of our proposed method. Our two-memory design effectively recovers surprise novelty by handling intra- and inter-episode surprise patterns thanks to M and W, respectively.
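Before detailing the two memory components, the following minimal sketch illustrates the surprise-generator interface of Eqs. 1-2 (Sec. 2.2), using an RND-style SG. It is an illustrative reading of the text, not the authors' implementation; the class name, layer sizes and argument names (RNDSurpriseGenerator, obs_dim, surprise_dim) are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class RNDSurpriseGenerator(nn.Module):
    """Sketch of an RND-style SG: a fixed random target network (O_t = f_R(s_t))
    and a trained predictor; the surprise u_t is their difference (Eq. 1)."""

    def __init__(self, obs_dim: int, surprise_dim: int = 512):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                    nn.Linear(256, surprise_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                       nn.Linear(256, surprise_dim))
        for p in self.target.parameters():   # the random target f_R stays fixed
            p.requires_grad_(False)

    def forward(self, state: torch.Tensor):
        o_t = self.target(state)                 # observation target O_t
        u_t = self.predictor(state) - o_t        # surprise vector u_t = SG(I_t) - O_t
        return u_t, u_t.norm(dim=-1)             # surprise and surprise norm

# The SG loss (Eq. 2) is then just the mean surprise norm over a batch:
sg = RNDSurpriseGenerator(obs_dim=64)
u_t, u_norm = sg(torch.randn(8, 64))
loss_sg = u_norm.mean()
```

Any other predictive model (ICM, NGU, a plain autoencoder) fits the same interface: it maps an input I_t and target O_t to a surprise vector u_t.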
M can quickly adapt and recall surprises that occur within an episode. W is slower and focuses more on consistent surprise patterns across episodes during training. Here, the query q_t could be directly set to the surprise u_t. However, this ignores the rapid change in surprise within an episode. Without M, when the SG and W are fixed (during interaction with environments), their outputs u_t and q̃_t stay the same for the same input I_t. Hence, the intrinsic reward r^i_t also stays the same. This is undesirable since, when the agent observes the same input at different timesteps (e.g., I_1 = I_2), we expect its curiosity to decrease in the second visit (r^i_2 < r^i_1). Therefore, we design the SM with M to fix this issue. The episodic memory M stores representations of surprises that the agent encounters during an episode. For simplicity, M is implemented as a first-in-first-out queue whose size is fixed as N. Notably, the content of M is wiped out at the end of each episode; its information is limited to a single episode. M can be viewed as a matrix M ∈ R^{N×d}, where d is the size of a memory slot. We denote M(j) as the j-th row in the memory, corresponding to the surprise u_{t−j}. To retrieve from M a read-out u^e_t that is close to u_t, we perform content-based attention (Graves et al., 2014) and compute the attention weight as w_t(j) = (u_t Q) M(j)^⊤ / (‖u_t Q‖ ‖M(j)‖). The read-out from M is then u^e_t = w_t M V ∈ R^n. Here, Q ∈ R^{n×d} and V ∈ R^{d×n} are learnable weights mapping between the surprise space and the memory space. To force the read-out to be close to u_t, we minimize: L_M = E_t[‖u^e_t − u_t‖] (3). The read-out and the SG's surprise form the query surprise to W: q_t = [u^e_t, u_t] ∈ R^{2n}. M stores intra-episode surprises to assist the autoencoder in preventing the agent from exploring fake surprise within the episode. Since we train the parameters to reconstruct u_t using past surprises in the episode, if the agent visits a state whose surprise is predictable from those in M, ‖u^e_t − u_t‖ should be small. Hence, the read-out context u^e_t contains no extra information beyond u_t, and reconstructing q_t with W becomes easier, as it is equivalent to reconstructing u_t. In contrast, visiting diverse states leads to a more novel read-out u^e_t and makes it more challenging to reconstruct q_t, generally leading to a higher intrinsic reward. The autoencoder network W can be viewed as an associative memory of surprises that persists across episodes. At timestep t in any episode during training, W is queried with q_t to produce a reconstructed memory q̃_t. The surprise novelty is then determined as: r^i_t = ‖q̃_t − q_t‖ (4), which is the norm of the surprise residual q̃_t − q_t. It will be normalized and added to the external reward as an intrinsic reward bonus. The details of computing and using normalized intrinsic rewards can be found in Appendix C. We implement W as a feed-forward neural network that learns to reconstruct its own inputs. This kind of autoencoder has been shown to be equivalent to an associative memory that supports memory encoding and retrieval through attractor dynamics (Radhakrishnan et al., 2020). The query surprise is encoded into the weights of the network via backpropagation as we minimize the reconstruction loss: L_W = E_t[r^i_t] = E_t[‖W(q_t) − q_t‖] (5). Here, q̃_t = W(q_t). Intuitively, it is easier to retrieve non-novel surprises experienced many times in past episodes. Thus, the intrinsic reward is lower for states that lead to these familiar surprises; on the contrary, rare surprises are harder to retrieve, which results in high reconstruction errors and high intrinsic rewards.
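The following sketch puts Eqs. 3-5 together: an episodic FIFO memory read with cosine-similarity attention, and a small autoencoder whose reconstruction error is the intrinsic reward. It is an illustrative reading of the description above and of the pseudocode in Appendix B, not the reference code; the class name, the 2-layer tanh autoencoder with hidden size 32, and the default sizes follow the implementation details reported in Appendix B, while everything else is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurpriseMemory(nn.Module):
    """Sketch of the SM: episodic memory M (FIFO, wiped each episode) plus an
    autoencoder W over the query q_t = [u^e_t, u_t]."""

    def __init__(self, n: int = 512, d: int = 16, capacity: int = 128, hidden: int = 32):
        super().__init__()
        self.Q = nn.Parameter(torch.randn(n, d) * 0.01)   # surprise -> memory space
        self.V = nn.Parameter(torch.randn(d, n) * 0.01)   # memory space -> surprise
        self.W = nn.Sequential(nn.Linear(2 * n, hidden), nn.Tanh(),
                               nn.Linear(hidden, 2 * n))
        self.capacity, self.slots = capacity, []

    def write(self, u_t):
        self.slots.append((u_t @ self.Q).detach())        # store projected surprise u_t Q
        if len(self.slots) > self.capacity:               # first-in-first-out queue
            self.slots.pop(0)

    def reset(self):                                      # wipe M at the end of an episode
        self.slots.clear()

    def forward(self, u_t):
        M = torch.stack(self.slots)                               # (N, d)
        key = u_t @ self.Q                                        # (d,)
        w = F.cosine_similarity(key.unsqueeze(0), M, dim=-1)      # attention weights w_t
        u_e = w @ M @ self.V                                      # read-out u^e_t
        loss_m = (u_e - u_t.detach()).norm()                      # L_M  (Eq. 3)
        q_t = torch.cat([u_e, u_t], dim=-1)                       # query surprise
        r_int = (self.W(q_t) - q_t.detach()).norm()               # r^i_t = L_W (Eqs. 4-5)
        return r_int, loss_m

sm = SurpriseMemory(n=512)
u = torch.randn(512)
sm.write(u)
r_int, loss_m = sm(u)   # compute r^i_t and L_M for the current surprise
```

In the full method the intrinsic reward r^i_t is additionally normalized (Appendix C), and the SM losses are prevented from updating the SG by detaching u_t before it enters the memory path, as in Algorithm 1.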
W acts like a long-term, inter-episode associative memory. Unlike slot-based memories, it has a fixed memory capacity, can compress information and can learn data representations. We could instead store the surprises in a slot-based memory across episodes, but the size of this memory would become enormous, and the data would be stored redundantly. Hence, the quality of the stored surprise would degrade as more and more observations come in. Readers can refer to Appendix A for the architecture details and for how W can be interpreted as implementing associative memory. The whole system SG+SM is trained end-to-end by minimizing the following loss: L = L_SG + L_M + L_W. Here, we block the gradients from L_W backpropagated to the parameters of the SG to avoid trivial reconstructions of q_t. Pseudocode of our algorithm is presented in Appendix B. 3 Experimental Results 3.1 Noisy-TV: Robustness against Noisy Observations We use Noisy-TV, an environment designed to fool exploration methods (Burda et al., 2018b; Savinov et al., 2018), to confirm that our method can generate intrinsic rewards that (1) are more robust to noise and (2) can discriminate rare and common observations through surprise novelty. We simulate this problem by employing a 3D maze environment with a random map structure. To make it more challenging, the TV is not fixed in specific locations in the maze. Instead, the agent carries the TV with it and can choose to watch TV at any time. Hence, there are three basic actions (turn left, turn right, and move forward) plus one more action: watch TV. When taking this action, the agent sees a white-noise image sampled from a standard normal distribution; the number of TV channels can thus be considered infinite. The agent's state is an image of its viewport, and its goal is to search for a red box randomly placed in the maze (+1 reward if the agent reaches the goal). The baseline is RND (Burda et al., 2018b), a simple yet strong SG that is claimed to obviate the stochasticity problems of Noisy-TV. Our SG+SM model uses RND as the SG, so we name it RND+SM. Since our model and the baseline share the same RND architecture, the difference in performance must be attributed to our SM. Fig. 3 (a) illustrates the mean-normalized intrinsic rewards (MNIR; see Appendix C for more information on this metric) measured at different states in our Noisy-TV environment. The first two states are noises, the following three states are common walls, and the last two are ones where the agent sees the box. The MNIR bars show that both models are attracted mainly by the noisy TV, resulting in the highest MNIRs. However, our model with SM suffers less from the noisy-TV distraction since its MNIR is lower than RND's. We speculate that the SM is able to partially reconstruct the white-noise surprise via the pass-through mechanism, making the normalized surprise novelty generally smaller than the normalized surprise norm in this case. That mechanism is enhanced in the SM through surprise reconstruction (see Appendix D.1 for an explanation). On the other hand, when observing the red box, RND+SM shows a higher MNIR than RND. The difference between the MNIR for common and rare states is also more prominent in RND+SM than in RND, because RND's prediction is not perfect even for common observations, creating relatively significant surprise norms for seeing walls.
The SM xes that issue by remembering surprise patterns and successfully retrieving them, producing much smaller surprise novelty compared to those of rare events like seeing red box. Consequently, the agent with SM outperforms the other by a massive margin in task rewards (Fig. 3 (b)). As we visualize the number of watching TV actions and the value of the intrinsic reward by RND+SM and RND over training time, we realize that RND+SM helps the agent take fewer watching actions and thus, collect smaller amounts of intrinsic rewards compared to RND. We also verify that our proposed method outperforms a simpli ed version of SM using counts to measure surprise novelty and a vanilla baseline that does not use intrinsic motivation. The details of these results are given in Appendix D.1. 3.2 MiniGrid: Compatibility with Different Surprise Generators We show the versatility of our framework SG+SM by applying SM to 4 SG backbones: RND (Burda et al., 2018b), ICM (Pathak et al., 2017), NGU (Badia et al., 2019) and autoencoderAE (see Appendix D.2 for implementation details). We test the models on three tasks from MiniGrid environments: Key-Door (KD), Dynamic-Obstacles (DO) and Lava-Crossing (LC) (Chevalier-Boisvert et al., 2018). If the agent reaches the goal in the tasks, it receives a +1 reward. Otherwise, it can be punished with negative rewards if it collides with obstacles or takes too much time to nish the task. These environments are not stochastic as the Noisy-TV but they still contain other types of distraction. For example, in KD, the agent can be attracted to irrelevant actions such as going around to drop and pick the key. In DO, instead of going to the destination, the agent may chase obstacle balls ying around the map. In LC the agent can commit unsafe actions like going near lava areas, which are di erent from typical paths. In any case, due to reward sparsity, intrinsic motivation is bene cial. However, surprise alone may not be enough to guide an e cient exploration since the observation can be too complicated for SG to minimize its prediction error. Thus, the agent quickly feels surprised, even in unimportant states. Table 1 shows the average returns of the models for three tasks. The Baseline is the PPO backbone trained without intrinsic reward. RND, ICM, NGU and AE are SGs providing the PPO with surprise-norm rewards while our method SG+SM uses surprise-novelty rewards. The results demonstrate that models with SM often outperform SG signi cantly and always contain the best performers. Notably, in the LC task, SGs hinder the performance of the Baseline because the agents are attracted to dangerous vivid states, which are hard to predict but cause the agent's death. The SM models avoid this issue and outperform the Baseline for the case of ICM+SM. Compared to AE, which computes intrinsic reward based on the novelty of the state, AE+SM shows a much higher average score in all tasks. That manifests the importance of modeling the novelty of surprise instead of states. To analyze the di erence between the SG+SM and SG's MNIR structure, we visualize the MNIR for each cell in the map of Key-Door in Appendix's Figs. 5 (b) and (c). We create a synthetic trajectory that scans through all the cells in the big room on the left and, at each cell, uses RND+SM and RND models to compute the corresponding surprise-norm and surprise-novelty MNIRs, respectively. As shown in Fig. 5 (b), RND+SM selectively identi es truly surprising events, where only a few cells have high surprise-novelty MNIR. 
Here, we can visually detect three important events that receive the most MNIR: seeing the key (bottom row), seeing the door side (in the middle of the rightmost column) and approaching the front of the door (the second and fourth rows). Other less important cells are assigned very low MNIR. On the contrary, RND often gives high surprise-norm MNIR to cells around important ones, which creates a noisy MNIR map as in Fig. 5 (c). As a result, RND's performance is better than Baseline, yet far from that of RND+SM. Another analysis of how surprise novelty discriminates against surprises with similar norms is given in Appendix's Fig. 8. 3.3 Atari: Sample-efficient Benchmark We adopt the sample-e ciency Atari benchmark (Kim et al., 2019) on six hard exploration games where the training budget is only 50 million frames. We use our SM to augment 2 SGs: RND (Burda et al., 2018b) and LWM (Ermolov and Sebe, 2020). Unlike RND, LWM uses a recurrent world model and forward dynamics to generate surprises. Details of the SGs, training and evaluation are in Appendix D.3. We run the SG and SG+SM in the same codebase and setting. Table 2 reports our and representative results from prior works, showing SM-augmented models outperform their SG counterparts in all games (same codebase). In Frostbite and Montezuma Revenge, RND+SM's score is almost twice as many as that of RND. For LWM+SM, games such as Gravitar and Venture observe more than 40% improvement. Overall, LWM+SM and RND+SM achieve the best mean and median human normalized score, improving 16% and 22% w.r.t the best SGs, respectively. Notably, RND+SM shows signi cant improvement for the notorious Montezuma Revenge. We also verify the bene t of the SM in the long run for Montezuma Revenge and Frostbite. As shown in Fig. 4 (a,b), RND+SM still signi cantly outperforms RND after 200 million training frames, achieving average scores of 10,000 and 9,000, respectively. The result demonstrates the scalability of our proposed method. When using RND and RND+SM to compute the average MNIR in several rooms in Montezuma Revenge (Fig. 1), we realize that SM makes MNIR higher for surprising events in rooms with complex structures while depressing the MNIR of fake surprises in dark rooms. Here, even in the dark room, the movement of agents (human or spider) is hard to predict, leading to a high average MNIR. On the contrary, the average MNIR of surprise novelty is reduced if the prediction error can be recalled from the memory. Finally, measuring the running time of the models, we notice little computing overhead caused by our SM. On our Nvidia A100 GPUs, LWM and LWM+SM's average time for one 50M training are 11h 38m and 12h 10m, respectively. For one 200M training, RND and RND+SM's average times are 26h 24m and 28h 1m, respectively. These correspond to only 7% more training time while the performance gap is signi cant (4000 scores). 3.4 Ablation Study Role of Memories Here, we use Minigrid's Dynamic-Obstacle task to study the role of M and W in the SM (built upon RND as the SG). Disabling W, we directly use ‖qt‖ = ‖[uet , ut]‖ as the intrinsic reward, and name this version: SM (no W). To ablate the e ect ofM, we remove uet from qt and only use qt = ut as the query to W, forming the version: SM (no M). We also consider di erent episodic memory capacity and slot size N -d= {32− 4, 128− 16, 1024− 64}. As N and d increase, the short-term context expands and more past surprise information is considered in the attention. 
In theory, a bigM is helpful to capture long-term and more accurate context for constructing the surprise query. Fig. 4 (c) depicts the performance curves of the methods after 10 million training steps. SM (no W) and SM (noM) show weak signs of learning, con rming the necessity of both modules in this task. Increasing N -d from 32−4 to 1024−64 improves the nal performance. However, 1024− 64 is not signi cantly better than 128− 16, perhaps because it is unlikely to have similar surprises that are more than 128 steps apart. Thus, a larger attention span does not provide a bene t. As a result, we keep using N = 128 and d = 16 in all other experiments for faster computing. We also verify the necessity ofM and W in Montezuma Revenge and illustrate how M generates lower MNIR when 2 similar event occurs in the same episode in Key-Door (see Appendix D.4). No Task Reward In this experiment, we remove task rewards and merely evaluate the agent's ability to explore using intrinsic rewards. The task is to navigate 3D rooms and get a +1 reward for picking an object (Chevalier-Boisvert, 2018). The state is the agent's image view, and there is no noise. Without task rewards, it is crucial to maintain the agent's interest in unique events of seeing the objects. In this partially observable environment, surprise-prediction methods may struggle to explore even without noise due to lacking information for good predictions, leading to usually high prediction errors. For this testbed, we evaluate random exploration agent (Baseline), RND and RND+SM in 2 settings: 1 room with three objects (easy), and 4 rooms with one object (hard). To see the di erence among the models, we compare the cumulative task rewards over 100 million steps (see Appendix D.4 for details). RND is even worse than Baseline in the easy setting because predicting causes high biases (intrinsic rewards) towards the unpredictable, hindering exploration if the map is simple. In contrast, RND+SM uses surprise novelty, generally showing smaller intrinsic rewards (see Appendix Fig. 12 (right)). Consequently, our method consistently demonstrates signi cant improvements over other baselines (see Fig. 4 (d) for the hard setting). 4 Related works Intrinsic motivation approaches usually give the agent reward bonuses for visiting novel states to encourage exploration. The bonus is proportional to the mismatch between the predicted and reality, also known as surprise (Schmidhuber, 2010). One kind of predictive model is the dynamics model, wherein the surprise is the error of the models as predicting the next state given the current state and action (Achiam and Sastry, 2017; Stadie et al., 2015). One critical problem of these approaches is the unwanted bias towards transitions where the prediction target is a stochastic function of the inputs, commonly found in partially observable environments. Recent works focus on improving the features of the predictor's input by adopting representation learning mechanisms such as inverse dynamics (Pathak et al., 2017), variational autoencoder, random/pixel features (Burda et al., 2018a), or whitening transform (Ermolov and Sebe, 2020). Although better representations may improve the reward bonus, they cannot completely solve the problem of stochastic dynamics and thus, fail in extreme cases such as the noisy-TV problem (Burda et al., 2018b). 
Besides dynamics prediction, several works propose to predict other quantities as functions of the current state by using autoencoder (Nylend, 2017), episodic memory (Savinov et al., 2018), and random network (Burda et al., 2018b). Burda et al. (2018) claimed that using a deterministic random target network is bene cial in overcoming stochasticity issues. Other methods combine this idea with episodic memory and other techniques, achieving good results in large-scale experiments (Badia et al., 2020; 2019). From an information theory perspective, the notation of surprise can be linked to information gain or uncertainty, and predictive models can be treated as parameterized distributions (Achiam and Sastry, 2017; Houthooft et al., 2016; Still and Precup, 2012). Furthermore, to prevent the agent from unpredictable observations, the reward bonus can be measured by the progress of the model's prediction (Achiam and Sastry, 2017; Lopes et al., 2012; Schmidhuber, 1991). However, these methods are complicated and hard to scale, requiring heavy computing. A di erent angle to handle stochastic observations during exploration is surprsie minimization (Berseth et al., 2020; Rhinehart et al., 2021). In this direction, the agents get bigger rewards for seeing more familiar states. Such a strategy is somewhat opposite to our approach and suitable for unstable environments where the randomness occurs separately from the agents' actions. These earlier works rely on the principle of using surprise as an incentive for exploration and di er from our principle that utilizes surprise novelty. Also, our work augments these existing works with a surprise memory module and can be used as a generic plug-in improvement for surprise-based models. We note that our memory formulation di ers from the memorybased novelty concept using episodic memory (Badia et al., 2019), momentum memory (Fang et al., 2022), or counting (Bellemare et al., 2016; Tang et al., 2017) because our memory operates on the surprise level, not the state level. In our work, exploration is discouraged not only in frequently visited states but also in states whose surprises can be reconstructed using SM. Our work provides a more general and learnable novelty detection mechanism, which is more exible than the nearest neighbour search or counting lookup table. 5 Discussion This paper presents Surprise Generator-Surprise Memory (SG+SM) framework to compute surprise novelty as an intrinsic motivation for the reinforcement learning agent. Exploring with surprise novelty is bene cial when there are repeated patterns of surprises or random observations. For example, in the Noisy-TV problem, our SG+SM can harness the agent's tendency to visit noisy states such as watching random TV channels while encouraging it to explore rare events with distinctive surprises. We empirically show that our SM can supplement three surprise-based SGs to achieve more rewards in fewer training steps in three grid-world environments. In 3D navigation without external reward, our method signi cantly outperforms the baselines. On two strong SGs, our SM also achieve superior results in hard-exploration Atari games within 50 million training frames. Even in the long run, our method maintains a clear performance gap from the baselines, as shown in Montezuma Revenge and Frostbite. 
If we view surprise as the first-order error between the observation and the prediction, then surprise novelty, the retrieval error between the surprise and the reconstructed memory, is essentially a second-order error. It would be interesting to investigate the notion of higher-order errors, study their theoretical properties, and utilize them for intrinsic motivation in our future work. A W as Associative Memory This section connects the associative memory concept to neural networks trained with the reconstruction loss as in Eq. 5. We show how the neural network (W) stores and retrieves its data. We use a 1-layer feed-forward neural network W to simplify the analysis, but the idea extends to multi-layer feed-forward neural networks. For simplicity, assuming W is a square matrix, the objective is to minimize the difference between the input and the output of W: L = ‖Wx − x‖²₂ (6). Using gradient descent, we update W as follows: W ← W − α ∂L/∂W = W − 2α(Wx − x)xᵀ = W − 2αWxxᵀ + 2αxxᵀ = W(I − 2αxxᵀ) + 2αxxᵀ, where I is the identity matrix and x is a column vector. If a batch of inputs {x_i}_{i=1}^B is used in computing the loss in Eq. 6, at step t we update W as W_t = W_{t−1}(I − αX_t) + αX_t, where X_t = 2 Σ_{i=1}^B x_i x_iᵀ. Starting from t = 0, after T updates the weight becomes W_T = W_0 ∏_{t=1}^T (I − αX_t) − α² Σ_{t=2}^T X_t X_{t−1} ∏_{k=t+1}^T (I − αX_k) + α Σ_{t=1}^T X_t (7). Given its form, X_t is symmetric positive-definite. Also, as α is often very small (0 < α ≪ 1), we can show that ‖I − αX_t‖ ≤ 1 − λ_min(αX_t) < 1. This means that as T → ∞, ‖W_0 ∏_{t=1}^T (I − αX_t)‖ → 0, and thus W_T → −α² Σ_{t=2}^T X_t X_{t−1} ∏_{k=t+1}^T (I − αX_k) + α Σ_{t=1}^T X_t, independent of the initialization W_0. Eq. 7 shows how the data (X_t) is integrated into the neural network weight W_t. The remaining component, −α² Σ_{t=2}^T X_t X_{t−1} ∏_{k=t+1}^T (I − αX_k), can be viewed as additional encoding noise. Without this component (assuming α is small enough), W_T ≈ α Σ_{t=1}^T X_t = 2α Σ_{t=1}^T Σ_{i=1}^B x_{i,t} x_{i,t}ᵀ, or equivalently, we obtain the Hebbian update rule W ← W + x_{i,t} ⊗ x_{i,t}, where W can be seen as the memory, ⊗ is the outer product, and x_{i,t} is the data item stored in the memory. This memory update is the same as that of classical associative memory models such as the Hopfield network and the Correlation Matrix Memory (CMM). Given a query q, we retrieve the value stored in W as the output of the neural network: q′ = qᵀW = qᵀR + α Σ_{t=1}^T qᵀX_t = qᵀR + 2α Σ_{t=1}^T Σ_{i=1}^B qᵀ x_{i,t} x_{i,t}ᵀ, where R = W_0 ∏_{t=1}^T (I − αX_t) − α² Σ_{t=2}^T X_t X_{t−1} ∏_{k=t+1}^T (I − αX_k). If q was presented to the memory W in the past as some x_j, then q′ can be written as q′ = qᵀR + 2α Σ_{t=1}^T Σ_{i=1, i≠j}^B qᵀ x_{i,t} x_{i,t}ᵀ + 2α ‖q‖² qᵀ, where the first term is noise and the second term is cross-talk. Assuming that the noise and cross-talk are insignificant thanks to the small α, we can retrieve exactly q, given that all items in the memory are orthogonal (by a suitable transformation, this condition can be relaxed to linear independence). As a result, after scaling q′ with 1/(2α) (for unit-norm items), the retrieval error ‖q′/(2α) − q‖ is 0. If q is new to W, the error will depend on whether the items stored in W are close to q. Usually, the higher the error, the more novel q is with respect to W.
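As a quick numerical sanity check of the associative-memory argument above (not part of the paper; the dimensions, learning rate and seed are arbitrary assumptions), the following NumPy snippet runs gradient descent on the reconstruction loss of Eq. 6 for a few orthonormal items and compares the retrieval error of stored items against a novel query.

```python
import numpy as np

# Gradient descent on L = ||Wx - x||^2 stores items in W (Hebbian-like update),
# so previously seen (orthonormal) items are retrieved with low error while a
# novel query is not.
rng = np.random.default_rng(0)
dim, alpha = 16, 1e-2
items = np.linalg.qr(rng.standard_normal((dim, 4)))[0].T    # 4 orthonormal "surprises"
W = np.zeros((dim, dim))
for _ in range(2000):
    for x in items:
        W -= alpha * 2 * np.outer(W @ x - x, x)              # dL/dW = 2 (Wx - x) x^T
stored_err = np.mean([np.linalg.norm(W @ x - x) for x in items])
novel = rng.standard_normal(dim)
novel /= np.linalg.norm(novel)
novel_err = np.linalg.norm(W @ novel - novel)
print(f"retrieval error, stored items: {stored_err:.4f}")    # close to 0
print(f"retrieval error, novel query : {novel_err:.4f}")     # substantially larger
```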
B SM's Implementation Details In practice, the short-term memory M is a tensor of shape [B, N, d], where B is the number of actors, N the memory length and d the slot size. B is an SG hyperparameter, tuned per task based on SG performance. For example, for the Noisy-TV, we tune RND as the SG, obtaining B = 64, and use the same value directly for M. N and d are the hyperparameters specific to our method. As mentioned in Sec. 3.4, we fix N = 128 and d = 16 in all experiments. As B increases in large-scale experiments, memory storage for M can become demanding. To overcome this issue, we can use the uniform writing trick to optimally preserve information while reducing N (Le et al., 2019). Also, for W, a small hidden size reduces the requirement for physical memory significantly. Practically, in all experiments, we implement W as a 2-layer feed-forward neural network with a hidden size of 32 (2n → 32 → 2n). The activation is tanh. With n = 512 and d = 16, the number of parameters of W is only about 65K. Also, Q ∈ R^{n×d} and V ∈ R^{d×n} have about 8K parameters. In total, our SM introduces fewer than 90K trainable parameters, which is marginal compared to the SG and policy/value networks (up to 10 million parameters). The joint training of SG+SM is presented in Algo. 2. We note that vectors in the algorithms are row vectors. For simplicity, the algorithms assume a single actor; in practice, our algorithm works with multiple actors and mini-batch training.

Algorithm 1: Intrinsic reward computation via the SG+SM framework.
Require: u_t, and our surprise memory SM consisting of a slot-based memory M, parameters Q, V, and a neural network W.
1: Compute L_SG = ‖u_t‖
2: Query M with u_t, retrieve u^e_t = w_t M V, where w_t is the attention weight
3: Compute L_M = ‖u^e_t − u_t.detach()‖
4: Query W with q_t = [u^e_t, u_t], retrieve q̃_t = W(q_t)
5: Compute the intrinsic reward r^i_t = L_W = ‖q̃_t − q_t.detach()‖
6: return L_SG, L_M, L_W

Algorithm 2: Jointly training SG+SM and the policy.
Require: buffer, policy π_θ, surprise-based predictor SG, and our surprise memory SM consisting of a slot-based memory M, parameters Q, V, and a neural network W.
1: Initialize π_θ, SG, Q, W
2: for iteration = 1, 2, ... do
3:   for t = 1, 2, ..., T do
4:     Execute policy π_θ to collect s_t, a_t, r_t, forming the input I_t (e.g., I_t = s_t) and target O_t
5:     Compute the surprise u_t = SG(I_t) − O_t.detach() (Eq. 1)
6:     Compute the intrinsic reward r^i_t using Algo. 1
7:     Compute the final reward r_t ← r_t + β r^i_t / r^{std}_t
8:     Add (I_t, O_t, s_{t−1}, s_t, a_t, r_t) to the buffer
9:     Add u_t Q to M
10:    if the episode is done then clear M
11:   end for
12:   for k = 1, 2, ..., K do
13:     Sample I_t, O_t from the buffer
14:     Compute the surprise u_t = SG(I_t) − O_t.detach() (Eq. 1)
15:     Compute L_SG, L_M, L_W using Algo. 1
16:     Update SG, Q and W by minimizing the loss L = L_SG + L_M + L_W
17:     Update π_θ with samples (s_{t−1}, s_t, a_t, r_t) from the buffer using the backbone algorithm
18:   end for
19: end for

C Intrinsic Reward Normalization Following (Burda et al., 2018b), to keep the intrinsic reward on a consistent scale, we normalize it by dividing by a running estimate of the standard deviation of the intrinsic returns. This normalized intrinsic reward (NIR) is used for training. In addition, there is a hyperparameter, the intrinsic reward coefficient, that scales the intrinsic contribution relative to the external reward. We denote the running standard deviation and the intrinsic reward coefficient as r^{std}_t and β, respectively, in Algo. 2. In our experiments, unless otherwise stated, β = 1.
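A minimal sketch of this training-time normalization, assuming a Welford-style running estimate of the standard deviation of discounted intrinsic returns; the class name, discount value and epsilon are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

class RunningIntrinsicNormalizer:
    """Divide each intrinsic reward by a running std of the intrinsic return,
    as used in the reward combination of Algorithm 2 (r <- r + beta * r_i / r_std)."""

    def __init__(self, gamma: float = 0.99, eps: float = 1e-8):
        self.gamma, self.eps = gamma, eps
        self.ret = 0.0                                 # discounted intrinsic return
        self.count, self.mean, self.m2 = 0, 0.0, 0.0   # Welford accumulators

    def __call__(self, r_int: float) -> float:
        self.ret = self.gamma * self.ret + r_int
        self.count += 1
        delta = self.ret - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (self.ret - self.mean)
        std = np.sqrt(self.m2 / max(self.count - 1, 1)) + self.eps
        return r_int / std

normalizer = RunningIntrinsicNormalizer()
beta = 1.0
total_reward = 0.5 + beta * normalizer(2.3)   # external reward plus scaled NIR
```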
We note that when comparing the intrinsic reward at di erent states in the same episode (as in the experiment section), we normalize intrinsic rewards by subtracting the mean, followed by a division by the standard deviation of all intrinsic rewards in the episode. Hence, the mean-normalized intrinsic reward (MNIR) in these experiments is di erent from the one used in training and can be negative. We argue that normalizing with mean and std. of the episode's intrinsic rewards is necessary to make the comparison reasonable. For example, in an episode, method A assigns all steps with intrinsic rewards of 200; and method B assigns novel steps with intrinsic rewards of 1 while others 0. Clearly, method A treats all steps in the episode equal, and thus, it is equivalent to giving no motivation for all of the steps in the episode (the learned policy will not motivate the agent to visit novel states). On the contrary, method B triggers motivation for novel steps in the episodes (the learned policy will encourage visits to novel states). Without normalizing by mean subtraction, it is tempting to conclude that the relative intrinsic reward of method A for a novel step is higher, which is technically incorrect. D Experimental Details D.1 Noisy-TV We create the Noisy-TV environment by modifying the Maze environment (MazeS3Fast-v0) in the MiniWorld library (Apache License) (Chevalier-Boisvert, 2018). The backbone RL algorithm is PPO. We adopt a public code repository for the implementation of PPO and RND (MIT License)3. In this environment, the state is an image of the agent's viewport. The details of architecture and hyperparameters of the backbone and RND is presented in Table 4. Most of the setting is the same as in the repository. We only tune the number of actors (32, 128, 1024), mini-batch size (4, 16, 64) and -clip (0.1, 0.2, 0.3) to suit our hardware and the task. After tuning with RND, we use the same setting for our RND+SM. Fig. 6 reports all results for this environment. Fig. 6 (a) compares the nal intrinsic reward (IR) generated by RND and RND+SM over training time. Overall, RND's IR is always higher than RND+SM's, indicating that our method is signi cantly reduces the attention of the agent to the noisy TV by assigning less IR to watching TV. Fig. 6 (b) compares the number of noisy actions between two methods where RND+SM consistently shows fewer watching TV actions. That con rms RND+SM agent is less distracted by the TV. As mentioned in the main text, RND+SM is better at handling noise than RND. Note that RND aims to predict the transformed states by minimizing ‖SG (st)− fR(st)‖ where fR is a xed neural network initialized randomly. If RND can learns the transformation, it can passthrough the state, which is similar to reconstruction in an autoencoder. However, learning fR can be harder and require more samples than learning an identity transformation since fR is non-linear and complicated. Hence, it may be more challenging for RND to pass-through the noise than SM. Another possible reason lies in the operating space (state vs. surprise). If we treat white noise as a random variable X, a surprise generator (SG) can at most learn to predict the mean of this variable and compute the surprise U = E [X|Y ] − X where Y is a random factor that a ects the training of the surprise generator. The factor Y makes the SG produce imperfect reconstruction E [X|Y ]4. Here, SG and SM learn to reconstruct X and U , respectively. 
We can prove that the variance of each feature dimension in U is smaller than that of X (see Sec. E). Learning an autoencoder on surprise space is more bene cial than in state space since the data has less variance and thus, it may require less data points to learn the data distribution. Fig. 6 (c) reports performance of all baselines. Besides RND and RND+SM, we also include PPO without intrinsic reward as the vanilla Baseline for reference. In addition, we investigate a simple implementation of SM using count-based method to measure surprise novelty. Concretely, we use SimHash algorithm to count the number of surprise c(ut) in a similar manner as (Bellemare et al., 2016) and name the baseline RND+SM (count). The 3https://github.com/jcwleo/random-network-distillation-pytorch 4In this case, the perfect reconstruction is E [X] intrinsic reward is then β/ √ c(ut). We tune the hyperparameter β = {0.5, 1, 5} and the hash matrix size kh = {32, 64, 128, 256} and use the same normalization and training process to run this baseline. We report the learning curves of the best variant with β = 0.5 and kh = 128. The result demonstrates that the proposed SM using memory-augmented neural networks outperforms the count-based SM by a signi cant margin. One possible reason is that count-based method cannot handle white noise: it always returns high intrinsic rewards. In contrast, our SM can somehow reconstruct white noise via pass-through mechanism and thus reduces the impact of fake surprise on learning. Also, the proposed SM is more exible than the count-based counterpart since it learns to reconstruct from the data rather than using a x counting scheme. The result also shows that RND+SM outperforms the vanilla Baseline. Although the improvement is moderate (0.9 vs 0.85), the result is remarkable since the Noisy-TV is designed to fool intrinsic motivation methods and among all, only RND+SM can outperform the vanilla Baseline. D.2 MiniGrid The tasks in this experiment are from the MiniGrid library (Apache License) (ChevalierBoisvert et al., 2018). In MiniGrid environments, the state is a description vector representing partial observation information such as the location of the agents, objects, moving directions, etc. The three tasks use hardest maps: • DoorKey: MiniGrid-DoorKey-16x16-v0 • LavaCrossing: MiniGrid-LavaCrossingS11N5-v0 • DynamicObstacles: MiniGrid-Dynamic-Obstacles-16x16-v0 The SGs used in this experiment are RND (Burda et al., 2018b), ICM (Pathak et al., 2017), NGU (Badia et al., 2019) and AE. Below we describe the input-output structure of these SGs. • RND: It = st and Ot = fR (st) where st is the current state and fR is a neural network that has a similar structure as the prediction network, yet its parameters are initialized randomly and xed during training. • ICM: It = (st−1, at) and Ot = st where s is the embedding of the state and a the action. We note that in addition to the surprise loss (Eq. 2), ICM is trained with inverse dynamics loss. • NGU: This agent reuses the RND as the SG (It = st and Ot = fR (st)) and combines the surprise norm with an KNN episodic reward. When applying our SM to NGU, we only take the surprise-based reward as input to the SM. The code for NGU is based on this public repository https://github.com/opendilab/DI-engine. • AE: It = st and Ot = st where s is the embedding of the state. This SG can be viewed as an associative memory of the observation, aiming to remember the states. This baseline is designed to verify the importance of surprise modeling. 
Despite sharing a similar architecture, it di ers from our SM, which operates on surprise and have an augmented episodic memory to support reconstruction. The backbone RL algorithm is PPO. The code for PPO and RND is the same as in Sec. D.1. We adopt a public code repository for the implementation of ICM (MIT License)5. We implement AE ourselves using a 3-layer feed-forward neural network. For the SGs, we only tune the number of actors (32, 128, 1024), mini-batch size (4, 16, 64) and -clip (0.1, 0.2, 0.3) for the DoorKey task. We also tune the architecture of the AE (number of layers: 1,2 or 3, activation tanh or ReLU) on the DoorKey task. After tuning the SGs, we use the same setting for our SG+SM. The detailed con gurations of the SGs for this experiment are reported in Table 3 and Table 4. The full learning curves of the backbone (Baseline), SG and SG+SM are given in Fig. 7. To visualize the di erence between surprise and surprise residual vectors, we map these in the synthetic trajectory to 2-dimensional space using t-SNE projection in Fig. 8. The surprise points show clustered patterns for high-MNIR states, which con rms our hypothesis that there exist familiar surprises (they are highly surprising due to high norm, yet repeated). In contrast, the surprise residual estimated by the SM has no high-MNIR clusters. The SM transforms clustered surprises to scatter surprise residuals, resulting in a broader range of MNIR, thus showing signi cant discrimination on states that have similar surprise norm. D.3 Atari The Atari 2600 Games task involves training an agent to achieve high game scores. The state is a 2d image representing the screen of the game. 5https://github.com/jcwleo/curiosity-driven-exploration-pytorch Surprise Surprise Residual SG and RL backbone implementations We use 2 SGs: RND and LWM. RND uses a PPO backbone as in previous sections. On the other hand, LWM uses DQN backbone with CNN-based encoder and GRU-based value function. The LWM SG uses GRU to model forward dynamics of the environment and thus its input is: It = (st−1, at, ht−1) where st−1 is the embedding of the previous state, at the current action, and ht−1 the hidden state of the world model GRU. The target Ot is the embedding of the current state st. RND follows the same implementation as in previous experiments. We use the public code of LWM provided by the authors6 to implement LWM. The hyperparameters of RND and LWM are tuned by the repository's owner (see Table 4 for RND and refer to the code or the original paper (Ermolov and Sebe, 2020) for the details of LWM implementation). We augment them with our SM of default hyperparameters N = 128, d = 16. Training and evaluation We follow the standard training for Atari games, such as stacking four frames and enabling sticky actions. All the environments are based on OpenAI's gym-atari's NoFrameskip-v4 variants (MIT Liscence)7 . After training, we evaluate the models by measuring the average return over 128 episodes and report the results in Table. 2. Depending on the setting, the models are trained for 50 or 200 million frames. Results Fig. 9 demonstrates the learning curves of all models in 6 Atari games under the low-sample regime. LWM+SM and RND+SM clearly outperfrom LWM and RND in Frostbite, Venture, Gravitar, Solaris and Frostbite, Venture, Gravitar and MontezumaRevenge, respectively. Table 5 reports the results of more baselines. 
D.4 Ablation Study Role of Memories We conduct further ablation studies to verify the need for the short-term (M) and long-term (W) memory in our SM. We design the additional baselines SM (no W) and SM (no M) (see Sec. 3.4), and compare them with the full SM on the Montezuma Revenge and Frostbite tasks. Fig. 10 (a) shows that only SM (full) can reach an average score of more than 5000 after 50 million training frames; the ablated baselines only achieve around 2000. (Footnote 6: https://github.com/htdt/lwm. Footnote 7: https://github.com/openai/gym.) We also show the impact of the episodic memory in decreasing the intrinsic rewards for similar states, as discussed in Sec. 2.3. We select 3 states in MiniGrid's Key-Door task and compute the MNIR for each state, visualized in Fig. 11. At the step-1 state, the MNIR is low since there is nothing special in the view of the agent. At the step-15 state, the agent first sees the key and gets a high MNIR. At the step-28 state, the agent drops the key and sees the key again. This event is still more interesting than the step-1 state. However, the view is similar to the one at step 15, and thus the MNIR decreases from 0.7 to 0.35, as expected. No Task Reward The tasks in this experiment are from the MiniWorld library (Apache License) (Chevalier-Boisvert, 2018). The two tasks are: • Easy: MiniWorld-PickupObjs-v0 • Hard: MiniWorld-FourRooms-v0. The backbone and SG are the same as in Sec. D.1. We remove the task/external reward in this experiment. Without a task reward, the Baseline receives no training signal and thus behaves similarly to a random agent. Fig. 12 illustrates the running average of the cumulative task return and the intrinsic reward over training steps. In the Easy mode, the random Baseline can even perform better than RND, which indicates that a biased intrinsic reward is not always helpful. RND+SM, in both modes, shows superior performance, confirming that its intrinsic reward is better at guiding exploration than that of RND. E Theoretical Property of the Surprise Space's Variance Let X be a random variable representing the observation at some timestep. A surprise generator (SG) can at most learn to predict the mean of this variable and compute the surprise U = E[X|Y] − X, where Y is a random factor that affects the prediction of the SG and makes it produce the imperfect reconstruction E[X|Y] instead of E[X]. For instance, in the case of an autoencoder AE as the SG, X and U stand for s_t and AE(s_t) − s_t, respectively. Let us denote Z = E[X|Y]; then E[Z|Y] = Z and E[Z²|Y] = Z². We have var(X) = var(X − Z + Z) = var(X − Z) + var(Z) + 2cov(X − Z, Z) = var(X − Z) + var(Z) + 2E[(X − Z)Z] − 2E[X − Z]E[Z]. Using the law of iterated expectations, we have E[X − Z] = E[E[X − Z|Y]] = E[E[X|Y] − E[Z|Y]] = E[Z − Z] = 0 and E[(X − Z)Z] = E[E[(X − Z)Z|Y]] = E[E[XZ − Z²|Y]] = E[E[XZ|Y] − E[Z²|Y]] = E[Z E[X|Y] − Z²] = E[Z² − Z²] = 0. Therefore, var(X) = var(X − Z) + var(Z). Let C^X_ii, C^{X−Z}_ii and C^Z_ii denote the diagonal entries of these covariance matrices; they are the variances of the components of the random vectors X, X − Z and Z, respectively. That is, (σ_i^X)² = (σ_i^{X−Z})² + (σ_i^Z)², which implies (σ_i^X)² ≥ (σ_i^{X−Z})² = (σ_i^U)². In our setting, X and U represent the observation and surprise spaces, respectively. Therefore, the variance of each feature dimension in surprise space is no larger than that in observation space. Equality is obtained when (σ_i^Z)² = 0, i.e., E[X|Y] = E[X]; that is, when the SG's prediction is perfect, which is unlikely to happen in practice.
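As an illustration of this argument (not part of the paper), a small Monte-Carlo check with a hypothetical latent factor Y and a two-dimensional observation confirms that the per-dimension variance of the surprise U = E[X|Y] − X never exceeds that of X:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000
y = rng.standard_normal(n_samples)                  # latent factor Y seen by the SG
noise = rng.standard_normal((n_samples, 2))
x = np.stack([y, 2 * y], axis=1) + noise            # observation X, correlated with Y
z = np.stack([y, 2 * y], axis=1)                    # imperfect prediction Z = E[X|Y]
u = z - x                                           # surprise U

print("var(X) per dim:", x.var(axis=0))             # approx [2.0, 5.0]
print("var(U) per dim:", u.var(axis=0))             # approx [1.0, 1.0], <= var(X)
```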
F Limitations Our method assumes that surprises have patterns and can be remembered by our surprise memory. There might exist environments beyond those studied in this paper where this assumption does not hold, or where surprise-based counterparts already achieve optimal exploration (e.g., with a perfect SG) and thus do not need the SM for improvement (e.g., the Freeway game). In addition, M and W require more physical memory (RAM/GPU) than SG-only methods. Finally, a plug-in module like the SM introduces additional hyperparameters, such as N and d. Although we find that the default values of N = 128 and d = 16 work well across all experiments in this paper, we recommend adjusting them if users apply our method to novel domains.
1. What is the focus and contribution of the paper on reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of its novel use of memory?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the integration of the proposed method with existing surprise generators?
Summary Of The Paper
This paper proposes a new method to manage surprise signals in reinforcement learning. Surprises are stored in a memory for each episode, and the memory is wiped after each episode. An autoencoder is used to perform readouts. The memory module can be plugged into existing surprise generators. Benchmark results show that the addition of the memory is almost always beneficial. The main difference with respect to competing memory-based approaches in RL is that the proposed memory works at the surprise level and not at the state level.

Strengths And Weaknesses
Strengths: The use of memory at the surprise level is novel and well motivated. The approach can be integrated easily into existing methods.
Weaknesses: Novelty is slightly limited; memory has been used in https://arxiv.org/pdf/2002.06038.pdf (as also mentioned in the paper), but with different data being stored (agent state vs. surprise). Results show the improvement with respect to three methods, but there is no direct comparison with the existing state of the art. Specifically, why is the only other memory-based approach (https://arxiv.org/pdf/2002.06038.pdf) not taken into account in the comparison? In general, a table reporting recent state-of-the-art methods would improve the overall presentation.

Clarity, Quality, Novelty And Reproducibility
The paper is clear, although I believe a better definition of the learning problem onto which the MANN approach is plugged would also help researchers outside of pure RL benefit from this work. Novelty is slightly limited (see above). The presentation is clear enough for a field expert to reproduce the work.
ICLR
Title Intrinsic Motivation via Surprise Memory Abstract We present a new computing model for intrinsic rewards in reinforcement learning that addresses the limitations of existing surprise-driven explorations. The reward is the novelty of the surprise rather than the surprise norm. We estimate the surprise novelty as retrieval errors of a memory network wherein the memory stores and reconstructs surprises. Our surprise memory (SM) augments the capability of surprise-based intrinsic motivators, maintaining the agent's interest in exciting exploration while reducing unwanted attraction to unpredictable or noisy observations. Our experiments demonstrate that the SM combined with various surprise predictors exhibits e cient exploring behaviors and signi cantly boosts the nal performance in sparse reward environments, including Noisy-TV, navigation and challenging Atari games. 1 Introduction What motivates agents to explore? Successfully answering this question would enable agents to learn e ciently in formidable tasks. Random explorations such as -greedy are ine cient in high dimensional cases, failing to learn despite training for hundreds of million steps in sparse reward games (Bellemare et al., 2016). Alternative approaches propose to use intrinsic motivation to aid exploration by adding bonuses to the environment's rewards (Bellemare et al., 2016; Stadie et al., 2015). The intrinsic reward is often proportional to the novelty of the visiting state: it is high if the state is novel (e.g. di erent from the past ones (Badia et al., 2020; 2019)) or less frequently visited (Bellemare et al., 2016; Tang et al., 2017). Another view of intrinsic motivation is from surprise, which refers to the result of the experience being unexpected, and is determined by the discrepancy between the expectation (from the gent's prediction) and observed reality (Barto et al., 2013; Schmidhuber, 2010). Technically, surprise is the di erence between prediction and observation representation vectors. The norm of the residual (i.e. prediction error) is used as the intrinsic reward. Here, we will use the terms surprise and surprise norm to refer to the residual vector and its norm, respectively. Recent works have estimated surprise with various predictive models such as dynamics (Stadie et al., 2015), episodic reachability (Savinov et al., 2018) and inverse dynamics (Pathak et al., 2017); and achieved signi cant improvements with surprise norm (Burda et al., 2018a). However, surprise-based agents tend to be overly curious about noisy or unpredictable observations (Itti and Baldi, 2005; Schmidhuber, 1991). For example, consider an agent watching a television screen showing white noise (noisy-TV problem). The TV is boring, yet the agent cannot predict the screen's content and will be attracted to the TV due to its high surprise norm. This distraction or "fake surprise" is common in partially observable Markov Decision Process (POMDP), including navigation tasks and Atari games (Burda et al., 2018b). Many works have addressed this issue by relying on the learning progress (Achiam and Sastry, 2017; Schmidhuber, 1991) or random network distillation (RND) (Burda et al., 2018b). However, the former is computationally expensive, and the latter requires many samples to perform well. This paper overcomes the "fake surprise" issue by using surprise novelty - a new concept that measures the uniqueness of surprise. To identify surprise novelty, the agent needs to compare the current surprise with surprises in past encounters. 
One way to do this is to equip the agent with some kind of associative memory, which we implement as an autoencoder whose task is to reconstruct a query surprise. The lower the reconstruction error, the lower the surprise novelty. A further mechanism is needed to deal with the rapid changes in surprise structure within an episode. As an example, if the agent meets the same surprise at two time steps, its surprise novelty should decline, and with a simple autoencoder this will not happen. To remedy this, we add an episodic memory, which stores intra-episode surprises. Given the current surprise, this memory can retrieve similar surprises presented earlier in the episode through an attention mechanism. These surprises act as a context added to the query to help the autoencoder better recognize whether the query surprise has been encountered in the episode or not. The error between the query and the autoencoder's output is de ned as surprise novelty, to which the intrinsic reward is set proportionally. We argue that using surprise novelty as an intrinsic reward is better than surprise norm. As in POMDPs, surprise norms can be very large since the agent cannot predict its environment perfectly, yet there may exist patterns of prediction failure. If the agent can remember these patterns, it will not feel surprised when similar prediction errors appear regardless of the surprise norms. An important emergent property of this architecture is that when random observations are presented (e.g., white noise in the noisy-TV problem), the autoencoder can act as an identity transformation operator, thus e ectively passing the noise through to reconstruct it with low error. We conjecture that the autoencoder is able to do this with the surprise rather than the observation as the surprise space has lower variance, and we show this in our paper. To make our memory system work on the surprise level, we adopt an intrinsic motivation method to generate surprise for the memory. The surprise generator (SG) can be of any kind based on predictive models and is jointly trained with the memory to optimize its own loss function. To train the surprise memory (SM), we optimize the memory's parameters to minimize the reconstruction error. Our contribution is to propose a new concept of surprise novelty for intrinsic motivation. We argue that it re ects better the environment originality than surprise norm (see motivating graphics Fig. 1). In our experiments, the SM helps RND (Burda et al., 2018b) perform well in our challenging noisy-TV problem while RND alone performs poorly. Not only with RND, we consistently demonstrate signi cant performance gain when coupling three di erent SGs with our SM in sparse-reward tasks. Finally, in hard exploration Atari games, we boost the scores of 2 strong SGs, resulting in better performance under the low-sample regime. 2 Methods 2.1 Surprise Novelty Surprise is the di erence between expectation and observation (Ekman and Davidson, 1994). If a surprise repeats, it is no longer a surprise. Based on this intuition, we hypothesize that surprises can be characterized by their novelties, and an agent's curiosity is driven by the surprise novelty rather than the surprising magnitude. Moreover, surprise novelty should be robust against noises: it is small even for random observations. For example, watching a random-channel TV can always be full of surprises as we cannot expect which channel will appear next. 
However, the agent should soon nd it boring since the surprise of random noises reoccurs repeatedly, and the channels are entirely unpredictable. We propose using a memory-augmented neural network (MANN) to measure surprise novelty. The memory remembers past surprise patterns, and if a surprise can be retrieved from the memory, it is not novel, and the intrinsic motivation should be small. The memory can also be viewed as a reconstruction network. The network can pass its inputs through for random, pattern-free surprises, making them retrievable. Surprise novelty has an interesting property: if some event is unsurprising (the expectation-reality residual is −→ 0 ), its surprise ( −→ 0 with norm 0) is always perfectly retrievable (surprise novelty is 0). In other words, low surprise norm means low surprise novelty. On the contrary, high surprise norm can have little surprise novelty as long as the surprise can be retrieved from the memory either through associative recall or pass-through mechanism. Another property is that the variance of surprise is generally lower than that of observation (state), potentially making the learning on surprise space easier. This property is formally stated as follows. Proposition 1. Let X and U be random variables representing the observation and surprise at the same timestep, respectively. Under an imperfect SG, the following inequality holds: ∀i : ( σXi )2 ≥ (σUi )2 where ( σXi )2 and ( σUi )2 denote the i-th diagonal elements of var(X) and var(U), respectively. Proof. See Appendix E. 2.2 Surprise Generator Since our MANN requires surprises for its operation, it is built upon a prediction model, which will be referred to as Surprise Generators (SG). In this paper, we adopt many wellknown SGs (e.g. RND (Burda et al., 2018b) and ICM (Pathak et al., 2017)) to predict the observation, compute the surprise ut and its norm for every step in the environment. The surprise norm is the Euclidean distance between the expectation and the actual observation: ‖ut‖ = ‖SG (It)−Ot‖ (1) where ut ∈ Rn is the surprise vector of size n, It the input of the SG at step t of the episode, SG (It) and Ot the SG's prediction and the observation target, respectively. The input It is speci c to the SG architecture choice, which can be the current (st) or previous state, action (st−1, at). The observation target Ot is usually a transformation (can be identical or random) of the current state st, which serves as the target for the SG's prediction. The SG is usually trained to minimize: LSG = Et [‖ut‖] (2) Here, predictable observations have minor prediction errors or little surprise. One issue is that a great surprise norm can be simply due to noisy or distractive observations. Next, we propose a remedy for this problem. 2.3 Surprise Memory The surprise generated by the SG is stored and processed by a memory network dubbed Surprise Memory (SM). It consists of an episodic memoryM and an autoencoder network W, jointly optimized to reconstruct any surprise. At each timestep, the SM receives a surprise ut from the SG module and reads content u e t from the memoryM. {uet , ut} forms a surprise query qt to W to retrieve the reconstructed q̃t. This reconstruction will be used to estimate the novelty of surprises forming intrinsic rewards rit. Fig. 2 summarizes the operations of the components of our proposed method. Our 2 memory design e ectively recovers surprise novelty by handling intra and inter-episode surprise patterns thanks toM andW, respectively. 
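Before detailing M and W in turn, the surprise generator of Sec. 2.2 can be made concrete. The following is a minimal PyTorch-style sketch of an RND-style SG producing the surprise u_t of Eq. 1; the MLP encoder, layer sizes and feature dimension are placeholder assumptions for illustration and do not correspond to the exact architectures used in our experiments.

import torch
import torch.nn as nn

class RNDSurpriseGenerator(nn.Module):
    # Sketch of an RND-style SG: a frozen, randomly initialized target network f_R
    # provides the observation target O_t = f_R(s_t); a trained predictor provides
    # SG(I_t). The residual u_t = SG(I_t) - O_t is the surprise vector of Eq. 1.
    def __init__(self, obs_dim, n=512):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, n))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, n))
        for p in self.target.parameters():
            p.requires_grad_(False)          # f_R stays fixed during training

    def surprise(self, state):
        o_t = self.target(state).detach()    # observation target O_t
        return self.predictor(state) - o_t    # u_t = SG(I_t) - O_t

    def loss(self, state):
        return self.surprise(state).norm(dim=-1).mean()   # L_SG = E_t[||u_t||] (Eq. 2)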
M can quickly adapt and recall surprises that occur within an episode. W is slower and focuses more on consistent surprise patterns across episodes during training. Here the query qt can be directly set to the surprise ut. However, this ignores the rapid change in surprise within an episode. Without M, when the SG and W are xed (during interaction with environments), their outputs ut and q̃t stay the same for the same input It. Hence, the intrinsic reward rit also stays the same. It is undesirable since when the agent observes the same input at di erent timesteps (e.g., I1 = I2), we expect its curiosity should decrease in the second visit (ri1 <r i 2). Therefore, we design SM withM to x this issue. The episodic memory M stores representations of surprises that the agent encounters during an episode. For simplicity,M is implemented as a rst-in- rst-out queue whose size is xed as N . Notably, the content of M is wiped out at the end of each episode. Its information is limited to a single episode. M can be viewed as a matrix: M ∈ RN×d, where d is the size of the memory slot. We denote M (j) as the j-th row in the memory, corresponding to the surprise ut−j . To retrieve fromM a read-out uet that is close to ut, we perform content-based attention (Graves et al., 2014) to compute the attention weight as wt (j) = (utQ)M(j)> ‖(utQ)‖‖M(j)‖ . The read-out fromM is then u e t = wtMV ∈ Rn. Here, Q ∈ Rn×d and V ∈ Rd×n are learnable weights mapping between the surprise and the memory space. To force the read-out close to ut, we minimize: LM = Et [‖uet − ut‖] (3) The read-out and the SG's surprise form the query surprise to W: qt = [uet , ut] ∈ R2n. M stores intra-episode surprises to assist the autoencoder in preventing the agent from exploring fake surprise within the episode. Since we train the parameters to reconstruct ut using past surprises in the episode, if the agent visits a state whose surprise is predictable from those in M, ‖uet − ut‖ should be small. Hence, the read-out context uet contains no extra information than ut and reconstructing qt fromW becomes easier as it is equivalent to reconstructing ut. In contrast, visiting diverse states leads to a more novel read-out u e t and makes it more challenging to reconstruct qt, generally leading to higher intrinsic reward. The autoencoder network W can be viewed as an associative memory of surprises that persist across episodes. At timestep t in any episode during training, W is queried with qt to produce a reconstructed memory q̃t. The surprise novelty is then determined as: rit = ‖q̃t − qt‖ (4) which is the norm of the surprise residual q̃t − qt. It will be normalized and added to the external reward as an intrinsic reward bonus. The details of computing and using normalized intrinsic rewards can be found in Appendix C. We implementW as a feed-forward neural network that learns to reconstruct its own inputs. This kind of autoencoder has been shown to be equivalent to an associative memory that supports memory encoding and retrieval through attractor dynamics (Radhakrishnan et al., 2020). The query surprise is encoded to the weights of the network via backpropagation as we minimize the reconstruction loss below: LW = Et [ rit ] = Et [‖W (qt)− qt‖] (5) Here, q̃t = W (qt). Intuitively, it is easier to retrieve non-novel surprises experienced many times in past episodes. Thus, the intrinsic reward is lower for states that leads to these familiar surprises. 
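The components described so far can be summarized in a minimal sketch. The single-vector (non-batched) interface, the FIFO handling and the initialization below are simplifying assumptions rather than the exact implementation; the hidden size and tanh activation follow Appendix B.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SurpriseMemory(nn.Module):
    # Sketch of the SM: an episodic FIFO memory M read by content-based attention
    # (with learnable maps Q, V), and a small autoencoder W over the query
    # q_t = [u_e_t, u_t]; the retrieval error of W is the surprise novelty (Eq. 4).
    def __init__(self, n, d=16, N=128, hidden=32):
        super().__init__()
        self.Q = nn.Parameter(0.01 * torch.randn(n, d))   # surprise -> memory space
        self.V = nn.Parameter(0.01 * torch.randn(d, n))   # memory space -> surprise
        self.register_buffer("M", torch.zeros(N, d))      # episodic memory, wiped per episode
        self.W = nn.Sequential(nn.Linear(2 * n, hidden), nn.Tanh(), nn.Linear(hidden, 2 * n))

    def read(self, u_t):                                  # u_t: (n,)
        w_t = F.cosine_similarity((u_t @ self.Q).unsqueeze(0), self.M, dim=-1)  # w_t(j)
        return w_t @ self.M @ self.V                       # read-out u_e_t in R^n

    def write(self, u_t):                                 # FIFO insert of u_t Q
        self.M = torch.cat([(u_t @ self.Q).detach().unsqueeze(0), self.M[:-1]], dim=0)

    def intrinsic_reward_and_losses(self, u_t):
        u_e = self.read(u_t)
        loss_m = (u_e - u_t.detach()).norm()               # L_M (Eq. 3)
        q_t = torch.cat([u_e, u_t.detach()], dim=-1)       # block L_W gradients to the SG
        loss_w = (self.W(q_t) - q_t.detach()).norm()       # L_W = r^i_t (Eqs. 4-5)
        return loss_w.detach(), loss_m, loss_w             # reward, memory loss, AE loss

In this sketch, a familiar query is reconstructed with a small residual and therefore yields a small bonus.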
On the contrary, rare surprises are harder to retrieve, which results in high reconstruction errors and intrinsic rewards. W is like a long-term inter-episode associative memory. Unlike slot-based memories, it has a xed memory capacity, can compress information and learn data representations. We could store the surprise in a slot-based memory across episodes, but the size of this memory would be autonomous, and the data would be stored redundantly. Hence, the quality of the stored surprise will reduce as more and more observations come in. Readers can refer to Appendix A to see the architecture details and how W can be interpreted as implementing associative memory. The whole system SG+SM is trained end-to-end by minimizing the following loss: L = LSG +LM +LW . Here, we block the gradients from LW backpropagated to the parameters of SG to avoid trivial reconstructions of qt. Pseudocode of our algorithm is presented in Appendix B. 3 Experimental Results 3.1 Noisy-TV: Robustness against Noisy Observations We use Noisy-TV, an environment designed to fool exploration methods (Burda et al., 2018b; Savinov et al., 2018), to con rm that our method can generate intrinsic rewards that (1) are more robust to noises and (2) can discriminate rare and common observations through surprise novelty. We simulate this problem by employing a 3D maze environment with a random map structure. The TV is not xed in speci c locations in the maze to make it more challenging. Instead, the agent brings the TV with it and can choose to watch TV anytime. Hence, there are three basic actions (turn left, right, and move forward) plus an action: watch TV. When taking this action, the agent will see a white noise image sampled from standard normal distribution and thus, the number of TV channels can be considered in nity. The agent's state is an image of its viewport, and its goal is to search for a red box randomly placed in the maze (+1 reward if the agent reaches the goal). The baseline is RND (Burda et al., 2018b), a simple yet strong SG that is claimed to obviate the stochastic problems of Noisy-TV. Our SG+SM model uses RND as the SG, so we name it RND+SM. Since our model and the baseline share the same RND architecture, the di erence in performance must be attributed to our SM. Fig. 3 (a) illustrates the mean-normalized intrinsic rewards (MNIR)1 measured at di erent states in our Noisy-TV environment. The rst two states are noises, the following three states are common walls, and the last two are ones where the agent sees the box. The 1See Appendix C for more information on this metric. MNIR bars show that both models are attracted mainly by the noisy TV, resulting in the highest MNIRs. However, our model with SM su ers less from noisy TV distractions since its MNIR is lower than RND's. We speculate that SM is able to partially reconstruct the whitenoise surprise via pass-through mechanism, making the normalized surprise novelty generally smaller than the normalized surprise norm in this case. That mechanism is enhanced in SM with surprise reconstruction (see Appendix D.1 for explanation). On the other hand, when observing red box, RND+SM shows higher MNIR than RND. The di erence between MNIR for common and rare states is also more prominent in RND+SM than in RND because RND prediction is not perfect even for common observations, creating relatively signi cant surprise norms for seeing walls. 
The SM xes that issue by remembering surprise patterns and successfully retrieving them, producing much smaller surprise novelty compared to those of rare events like seeing red box. Consequently, the agent with SM outperforms the other by a massive margin in task rewards (Fig. 3 (b)). As we visualize the number of watching TV actions and the value of the intrinsic reward by RND+SM and RND over training time, we realize that RND+SM helps the agent take fewer watching actions and thus, collect smaller amounts of intrinsic rewards compared to RND. We also verify that our proposed method outperforms a simpli ed version of SM using counts to measure surprise novelty and a vanilla baseline that does not use intrinsic motivation. The details of these results are given in Appendix D.1. 3.2 MiniGrid: Compatibility with Different Surprise Generators We show the versatility of our framework SG+SM by applying SM to 4 SG backbones: RND (Burda et al., 2018b), ICM (Pathak et al., 2017), NGU (Badia et al., 2019) and autoencoderAE (see Appendix D.2 for implementation details). We test the models on three tasks from MiniGrid environments: Key-Door (KD), Dynamic-Obstacles (DO) and Lava-Crossing (LC) (Chevalier-Boisvert et al., 2018). If the agent reaches the goal in the tasks, it receives a +1 reward. Otherwise, it can be punished with negative rewards if it collides with obstacles or takes too much time to nish the task. These environments are not stochastic as the Noisy-TV but they still contain other types of distraction. For example, in KD, the agent can be attracted to irrelevant actions such as going around to drop and pick the key. In DO, instead of going to the destination, the agent may chase obstacle balls ying around the map. In LC the agent can commit unsafe actions like going near lava areas, which are di erent from typical paths. In any case, due to reward sparsity, intrinsic motivation is bene cial. However, surprise alone may not be enough to guide an e cient exploration since the observation can be too complicated for SG to minimize its prediction error. Thus, the agent quickly feels surprised, even in unimportant states. Table 1 shows the average returns of the models for three tasks. The Baseline is the PPO backbone trained without intrinsic reward. RND, ICM, NGU and AE are SGs providing the PPO with surprise-norm rewards while our method SG+SM uses surprise-novelty rewards. The results demonstrate that models with SM often outperform SG signi cantly and always contain the best performers. Notably, in the LC task, SGs hinder the performance of the Baseline because the agents are attracted to dangerous vivid states, which are hard to predict but cause the agent's death. The SM models avoid this issue and outperform the Baseline for the case of ICM+SM. Compared to AE, which computes intrinsic reward based on the novelty of the state, AE+SM shows a much higher average score in all tasks. That manifests the importance of modeling the novelty of surprise instead of states. To analyze the di erence between the SG+SM and SG's MNIR structure, we visualize the MNIR for each cell in the map of Key-Door in Appendix's Figs. 5 (b) and (c). We create a synthetic trajectory that scans through all the cells in the big room on the left and, at each cell, uses RND+SM and RND models to compute the corresponding surprise-norm and surprise-novelty MNIRs, respectively. As shown in Fig. 5 (b), RND+SM selectively identi es truly surprising events, where only a few cells have high surprise-novelty MNIR. 
Here, we can visually detect three important events that receive the most MNIR: seeing the key (bottom row), seeing the door side (in the middle of the rightmost column) and approaching the front of the door (the second and fourth rows). Other less important cells are assigned very low MNIR. On the contrary, RND often gives high surprise-norm MNIR to cells around important ones, which creates a noisy MNIR map as in Fig. 5 (c). As a result, RND's performance is better than Baseline, yet far from that of RND+SM. Another analysis of how surprise novelty discriminates against surprises with similar norms is given in Appendix's Fig. 8. 3.3 Atari: Sample-efficient Benchmark We adopt the sample-e ciency Atari benchmark (Kim et al., 2019) on six hard exploration games where the training budget is only 50 million frames. We use our SM to augment 2 SGs: RND (Burda et al., 2018b) and LWM (Ermolov and Sebe, 2020). Unlike RND, LWM uses a recurrent world model and forward dynamics to generate surprises. Details of the SGs, training and evaluation are in Appendix D.3. We run the SG and SG+SM in the same codebase and setting. Table 2 reports our and representative results from prior works, showing SM-augmented models outperform their SG counterparts in all games (same codebase). In Frostbite and Montezuma Revenge, RND+SM's score is almost twice as many as that of RND. For LWM+SM, games such as Gravitar and Venture observe more than 40% improvement. Overall, LWM+SM and RND+SM achieve the best mean and median human normalized score, improving 16% and 22% w.r.t the best SGs, respectively. Notably, RND+SM shows signi cant improvement for the notorious Montezuma Revenge. We also verify the bene t of the SM in the long run for Montezuma Revenge and Frostbite. As shown in Fig. 4 (a,b), RND+SM still signi cantly outperforms RND after 200 million training frames, achieving average scores of 10,000 and 9,000, respectively. The result demonstrates the scalability of our proposed method. When using RND and RND+SM to compute the average MNIR in several rooms in Montezuma Revenge (Fig. 1), we realize that SM makes MNIR higher for surprising events in rooms with complex structures while depressing the MNIR of fake surprises in dark rooms. Here, even in the dark room, the movement of agents (human or spider) is hard to predict, leading to a high average MNIR. On the contrary, the average MNIR of surprise novelty is reduced if the prediction error can be recalled from the memory. Finally, measuring the running time of the models, we notice little computing overhead caused by our SM. On our Nvidia A100 GPUs, LWM and LWM+SM's average time for one 50M training are 11h 38m and 12h 10m, respectively. For one 200M training, RND and RND+SM's average times are 26h 24m and 28h 1m, respectively. These correspond to only 7% more training time while the performance gap is signi cant (4000 scores). 3.4 Ablation Study Role of Memories Here, we use Minigrid's Dynamic-Obstacle task to study the role of M and W in the SM (built upon RND as the SG). Disabling W, we directly use ‖qt‖ = ‖[uet , ut]‖ as the intrinsic reward, and name this version: SM (no W). To ablate the e ect ofM, we remove uet from qt and only use qt = ut as the query to W, forming the version: SM (no M). We also consider di erent episodic memory capacity and slot size N -d= {32− 4, 128− 16, 1024− 64}. As N and d increase, the short-term context expands and more past surprise information is considered in the attention. 
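As a note on the aggregate metrics in Table 2, the mean and median human-normalized scores follow the standard per-game normalization; the sketch below uses placeholder reference-score dictionaries rather than the actual per-game values.

def human_normalized_scores(agent_scores, random_scores, human_scores):
    # Per-game human-normalized score: (agent - random) / (human - random),
    # aggregated by mean and median across games.
    hns = {g: (agent_scores[g] - random_scores[g]) / (human_scores[g] - random_scores[g])
           for g in agent_scores}
    vals = sorted(hns.values())
    mean = sum(vals) / len(vals)
    mid = len(vals) // 2
    median = vals[mid] if len(vals) % 2 else 0.5 * (vals[mid - 1] + vals[mid])
    return hns, mean, median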
In theory, a bigM is helpful to capture long-term and more accurate context for constructing the surprise query. Fig. 4 (c) depicts the performance curves of the methods after 10 million training steps. SM (no W) and SM (noM) show weak signs of learning, con rming the necessity of both modules in this task. Increasing N -d from 32−4 to 1024−64 improves the nal performance. However, 1024− 64 is not signi cantly better than 128− 16, perhaps because it is unlikely to have similar surprises that are more than 128 steps apart. Thus, a larger attention span does not provide a bene t. As a result, we keep using N = 128 and d = 16 in all other experiments for faster computing. We also verify the necessity ofM and W in Montezuma Revenge and illustrate how M generates lower MNIR when 2 similar event occurs in the same episode in Key-Door (see Appendix D.4). No Task Reward In this experiment, we remove task rewards and merely evaluate the agent's ability to explore using intrinsic rewards. The task is to navigate 3D rooms and get a +1 reward for picking an object (Chevalier-Boisvert, 2018). The state is the agent's image view, and there is no noise. Without task rewards, it is crucial to maintain the agent's interest in unique events of seeing the objects. In this partially observable environment, surprise-prediction methods may struggle to explore even without noise due to lacking information for good predictions, leading to usually high prediction errors. For this testbed, we evaluate random exploration agent (Baseline), RND and RND+SM in 2 settings: 1 room with three objects (easy), and 4 rooms with one object (hard). To see the di erence among the models, we compare the cumulative task rewards over 100 million steps (see Appendix D.4 for details). RND is even worse than Baseline in the easy setting because predicting causes high biases (intrinsic rewards) towards the unpredictable, hindering exploration if the map is simple. In contrast, RND+SM uses surprise novelty, generally showing smaller intrinsic rewards (see Appendix Fig. 12 (right)). Consequently, our method consistently demonstrates signi cant improvements over other baselines (see Fig. 4 (d) for the hard setting). 4 Related works Intrinsic motivation approaches usually give the agent reward bonuses for visiting novel states to encourage exploration. The bonus is proportional to the mismatch between the predicted and reality, also known as surprise (Schmidhuber, 2010). One kind of predictive model is the dynamics model, wherein the surprise is the error of the models as predicting the next state given the current state and action (Achiam and Sastry, 2017; Stadie et al., 2015). One critical problem of these approaches is the unwanted bias towards transitions where the prediction target is a stochastic function of the inputs, commonly found in partially observable environments. Recent works focus on improving the features of the predictor's input by adopting representation learning mechanisms such as inverse dynamics (Pathak et al., 2017), variational autoencoder, random/pixel features (Burda et al., 2018a), or whitening transform (Ermolov and Sebe, 2020). Although better representations may improve the reward bonus, they cannot completely solve the problem of stochastic dynamics and thus, fail in extreme cases such as the noisy-TV problem (Burda et al., 2018b). 
Besides dynamics prediction, several works propose to predict other quantities as functions of the current state by using autoencoder (Nylend, 2017), episodic memory (Savinov et al., 2018), and random network (Burda et al., 2018b). Burda et al. (2018) claimed that using a deterministic random target network is bene cial in overcoming stochasticity issues. Other methods combine this idea with episodic memory and other techniques, achieving good results in large-scale experiments (Badia et al., 2020; 2019). From an information theory perspective, the notation of surprise can be linked to information gain or uncertainty, and predictive models can be treated as parameterized distributions (Achiam and Sastry, 2017; Houthooft et al., 2016; Still and Precup, 2012). Furthermore, to prevent the agent from unpredictable observations, the reward bonus can be measured by the progress of the model's prediction (Achiam and Sastry, 2017; Lopes et al., 2012; Schmidhuber, 1991). However, these methods are complicated and hard to scale, requiring heavy computing. A di erent angle to handle stochastic observations during exploration is surprsie minimization (Berseth et al., 2020; Rhinehart et al., 2021). In this direction, the agents get bigger rewards for seeing more familiar states. Such a strategy is somewhat opposite to our approach and suitable for unstable environments where the randomness occurs separately from the agents' actions. These earlier works rely on the principle of using surprise as an incentive for exploration and di er from our principle that utilizes surprise novelty. Also, our work augments these existing works with a surprise memory module and can be used as a generic plug-in improvement for surprise-based models. We note that our memory formulation di ers from the memorybased novelty concept using episodic memory (Badia et al., 2019), momentum memory (Fang et al., 2022), or counting (Bellemare et al., 2016; Tang et al., 2017) because our memory operates on the surprise level, not the state level. In our work, exploration is discouraged not only in frequently visited states but also in states whose surprises can be reconstructed using SM. Our work provides a more general and learnable novelty detection mechanism, which is more exible than the nearest neighbour search or counting lookup table. 5 Discussion This paper presents Surprise Generator-Surprise Memory (SG+SM) framework to compute surprise novelty as an intrinsic motivation for the reinforcement learning agent. Exploring with surprise novelty is bene cial when there are repeated patterns of surprises or random observations. For example, in the Noisy-TV problem, our SG+SM can harness the agent's tendency to visit noisy states such as watching random TV channels while encouraging it to explore rare events with distinctive surprises. We empirically show that our SM can supplement three surprise-based SGs to achieve more rewards in fewer training steps in three grid-world environments. In 3D navigation without external reward, our method signi cantly outperforms the baselines. On two strong SGs, our SM also achieve superior results in hard-exploration Atari games within 50 million training frames. Even in the long run, our method maintains a clear performance gap from the baselines, as shown in Montezuma Revenge and Frostbite. 
If we view surprise as the first-order error between the observation and the prediction, then surprise novelty, the retrieval error between the surprise and its memory reconstruction, is essentially a second-order error. It would be interesting to investigate the notion of higher-order errors, study their theoretical properties, and utilize them for intrinsic motivation in future work.

A W as Associative Memory

This section connects the associative memory concept to neural networks trained with the reconstruction loss in Eq. 5. We show how the neural network W stores and retrieves its data. We use a 1-layer feed-forward neural network W to simplify the analysis, but the idea extends to multi-layer feed-forward networks. For simplicity, assuming W is a square matrix, the objective is to minimize the difference between the input and the output of W: L = ‖Wx − x‖²₂ (6). Using gradient descent, we update W as follows: W ← W − α ∂L/∂W ← W − 2α(Wx − x)xᵀ ← W − 2αWxxᵀ + 2αxxᵀ ← W(I − 2αxxᵀ) + 2αxxᵀ, where I is the identity matrix and x is a column vector. If a batch of inputs {x_i}_{i=1}^{B} is used in computing the loss in Eq. 6, at step t we update W as W_t = W_{t−1}(I − αX_t) + αX_t, where X_t = 2∑_{i=1}^{B} x_i x_iᵀ. Starting from t = 0, after T updates the weight becomes

W_T = W_0 ∏_{t=1}^{T}(I − αX_t) − α² ∑_{t=2}^{T} X_t X_{t−1} ∏_{k=t+1}^{T}(I − αX_k) + α ∑_{t=1}^{T} X_t (7)

Given its form, X_t is symmetric positive-definite. Also, as α is often very small (0 < α ≪ 1), we can show that ‖I − αX_t‖ ≤ 1 − λ_min(αX_t) < 1. This means that as T → ∞, ‖W_0 ∏_{t=1}^{T}(I − αX_t)‖ → 0 and thus W_T → −α² ∑_{t=2}^{T} X_t X_{t−1} ∏_{k=t+1}^{T}(I − αX_k) + α ∑_{t=1}^{T} X_t, independent of the initialization W_0. Eq. 7 shows how the data (X_t) is integrated into the network weight W_t. The other component, α² ∑_{t=2}^{T} X_t X_{t−1} ∏_{k=t+1}^{T}(I − αX_k), can be viewed as additional encoding noise. Without this component (assuming α is small enough), W_T ≈ α ∑_{t=1}^{T} X_t = 2α ∑_{t=1}^{T} ∑_{i=1}^{B} x_{i,t} x_{i,t}ᵀ, or equivalently, we obtain the Hebbian update rule W ← W + x_{i,t} ⊗ x_{i,t}, where W can be seen as the memory, ⊗ is the outer product, and x_{i,t} is the data or item stored in the memory. This memory update is the same as that of classical associative memory models such as the Hopfield network and Correlation Matrix Memory (CMM).

Given a query q, we retrieve the value in W as the output of the neural network: qʹ = qᵀW = qᵀR + α ∑_{t=1}^{T} qᵀX_t = qᵀR + 2α ∑_{t=1}^{T} ∑_{i=1}^{B} qᵀx_{i,t} x_{i,t}ᵀ, where R = W_0 ∏_{t=1}^{T}(I − αX_t) − α² ∑_{t=2}^{T} X_t X_{t−1} ∏_{k=t+1}^{T}(I − αX_k). If q was presented to the memory W in the past as some x_j, then qʹ can be written as

qʹ = qᵀR + 2α ∑_{t=1}^{T} ∑_{i=1, i≠j}^{B} qᵀx_{i,t} x_{i,t}ᵀ + 2α qᵀ(qqᵀ),

where qᵀR is the noise term, the double sum is the cross-talk term, and the last term equals 2α‖q‖² qᵀ. Assuming that the noise is insignificant thanks to the small α, we can retrieve exactly q given that all items in the memory are orthogonal². As a result, after scaling qʹ by 1/(2α), the retrieval error ‖qʹ/(2α) − q‖ is 0. If q is new to W, the error will depend on whether the items stored in W are close to q. Usually, the higher the error, the more novel q is with respect to W.

B SM's Implementation Detail

In practice, the short-term memory M is a tensor of shape [B, N, d], where B is the number of actors, N the memory length and d the slot size. B is an SG hyperparameter and is tuned per task based on SG performance.
For example, for the Noisy-TV, we tune RND as the SG, obtaining B = 64 and directly using them for M. N and d are the special hyperparameters of our method. As mentioned in Sec. 3.4, we x N = 128 and d = 16 in all experiments. As B increases in large-scale experiments, memory storage for M can be demanding. To overcome this issue, we can use the uniform writing trick to optimally preserve information while reducing N (Le et al., 2019). Also, for W, by using a small hidden size, we can reduce the requirement for physical memory signi cantly. Practically, in all experiments, we implement W as a 2-layer feedforward neural network with a hidden size of 32 (2n → 32 → 2n). The activation is tanh. With n = 512 d = 16, the number of parameters of W is only about 65K. Also, Q ∈ Rn×d and V ∈ Rd×n have about 8K parameters. In total, our SM only introduces less than 90K trainable parameters, which are marginal to that of the SG and policy/value networks (up to 10 million parameters). The join training of SG+SM is presented in Algo. 2. We note that vector notations in the algorithm are row vectors. For simplicity, the algorithm assumes 1 actor. In practice, our algorithm works with multiple actors and mini-batch training. C Intrinsic Reward Normalization Following (Burda et al., 2018b), to make the intrinsic reward on a consistent scale, we normalized the intrinsic reward by dividing it by a running estimate of the standard deviations 2By certain transformation, this condition can be reduced to linear independence Algorithm 1 Intrinsic rewards computing via SG+SM framework. Require: ut, and our surprise memory SM consisting of a slot-based memoryM, parameters Q, V , and a neural network W 1: Compute LSG = ‖ut‖ 2: QueryM with ut, retrieve uet = wtMV where wt is the attention weight 3: Compute LM = ‖uet − ut.detach()‖ 4: Query W with qt = [uet , ut], retrieve q̃t =W(qt) 5: Compute intrinsic reward rit = LW = ‖q̃t − qt.detach()‖ 6: return LSG, LM, LW Algorithm 2 Jointly training SG+SM and the policy. Require: bu er, policy πθ, surprise-based predictor SG, and our surprise memory SM consisting of a slot-based memoryM, parameters Q, V , and a neural network W 1: Initialize πθ, SG, Q, W 2: for iteration = 1, 2, ... do 3: for t = 1, 2, ...T do 4: Execute policy πθ to collect st, at, rt, forming input It = st, ... and target Ot 5: Compute surprise ut = SG (It)−Ot.detach() (Eq. 1) 6: Compute intrinsic reward rit using Algo. 1 7: Compute nal reward rt ← rt + βrit/rstdt 8: Add (It, Ot, st−1, st, at, rt) to bu er 9: Add utQ toM 10: if done episode then clearM 11: end for 12: for k = 1, 2, ..,K do 13: Sample It, Ot from bu er 14: Compute surprise ut = SG (It)−Ot.detach() (Eq. 1) 15: Compute LSG, LM, LW using Algo. 1 16: Update SG, Q and W by minimizing the loss L = LSG + LM + LW 17: Update πθ with sample (st−1, st, at, rt) from bu er using backbone algorithms 18: end for 19: end for of the intrinsic returns. This normalized intrinsic reward (NIR) will be used for training. In addition, there is a hyperparameter named intrinsic reward coe cient to scale the intrinsic contribution relatively to the external reward. We denote the running mean's standard deviations and intrinsic reward coe cient as rstdt and β, respectively, in Algo. 2. In our experiments, if otherwise stated, β = 1. 
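A minimal sketch of this training-time normalization is given below; as a simplification, the running standard deviation is maintained with Welford's online update over per-step intrinsic rewards rather than over intrinsic returns.

class RunningStd:
    # Running estimate of the standard deviation (Welford's online algorithm),
    # used as r_std_t when scaling intrinsic rewards in Algorithm 2.
    def __init__(self, eps=1e-8):
        self.count, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

    def update(self, values):
        for v in values:
            self.count += 1
            delta = v - self.mean
            self.mean += delta / self.count
            self.m2 += delta * (v - self.mean)

    def std(self):
        return (self.m2 / max(self.count, 1)) ** 0.5 + self.eps

def combined_rewards(ext_rewards, int_rewards, stats, beta=1.0):
    # Final reward r_t <- r_t + beta * r^i_t / r_std_t (Algorithm 2, line 7).
    stats.update(int_rewards)
    return [r + beta * ri / stats.std() for r, ri in zip(ext_rewards, int_rewards)]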
We note that when comparing the intrinsic reward at di erent states in the same episode (as in the experiment section), we normalize intrinsic rewards by subtracting the mean, followed by a division by the standard deviation of all intrinsic rewards in the episode. Hence, the mean-normalized intrinsic reward (MNIR) in these experiments is di erent from the one used in training and can be negative. We argue that normalizing with mean and std. of the episode's intrinsic rewards is necessary to make the comparison reasonable. For example, in an episode, method A assigns all steps with intrinsic rewards of 200; and method B assigns novel steps with intrinsic rewards of 1 while others 0. Clearly, method A treats all steps in the episode equal, and thus, it is equivalent to giving no motivation for all of the steps in the episode (the learned policy will not motivate the agent to visit novel states). On the contrary, method B triggers motivation for novel steps in the episodes (the learned policy will encourage visits to novel states). Without normalizing by mean subtraction, it is tempting to conclude that the relative intrinsic reward of method A for a novel step is higher, which is technically incorrect. D Experimental Details D.1 Noisy-TV We create the Noisy-TV environment by modifying the Maze environment (MazeS3Fast-v0) in the MiniWorld library (Apache License) (Chevalier-Boisvert, 2018). The backbone RL algorithm is PPO. We adopt a public code repository for the implementation of PPO and RND (MIT License)3. In this environment, the state is an image of the agent's viewport. The details of architecture and hyperparameters of the backbone and RND is presented in Table 4. Most of the setting is the same as in the repository. We only tune the number of actors (32, 128, 1024), mini-batch size (4, 16, 64) and -clip (0.1, 0.2, 0.3) to suit our hardware and the task. After tuning with RND, we use the same setting for our RND+SM. Fig. 6 reports all results for this environment. Fig. 6 (a) compares the nal intrinsic reward (IR) generated by RND and RND+SM over training time. Overall, RND's IR is always higher than RND+SM's, indicating that our method is signi cantly reduces the attention of the agent to the noisy TV by assigning less IR to watching TV. Fig. 6 (b) compares the number of noisy actions between two methods where RND+SM consistently shows fewer watching TV actions. That con rms RND+SM agent is less distracted by the TV. As mentioned in the main text, RND+SM is better at handling noise than RND. Note that RND aims to predict the transformed states by minimizing ‖SG (st)− fR(st)‖ where fR is a xed neural network initialized randomly. If RND can learns the transformation, it can passthrough the state, which is similar to reconstruction in an autoencoder. However, learning fR can be harder and require more samples than learning an identity transformation since fR is non-linear and complicated. Hence, it may be more challenging for RND to pass-through the noise than SM. Another possible reason lies in the operating space (state vs. surprise). If we treat white noise as a random variable X, a surprise generator (SG) can at most learn to predict the mean of this variable and compute the surprise U = E [X|Y ] − X where Y is a random factor that a ects the training of the surprise generator. The factor Y makes the SG produce imperfect reconstruction E [X|Y ]4. Here, SG and SM learn to reconstruct X and U , respectively. 
We can prove that the variance of each feature dimension in U is smaller than that of X (see Sec. E). Learning an autoencoder on surprise space is more bene cial than in state space since the data has less variance and thus, it may require less data points to learn the data distribution. Fig. 6 (c) reports performance of all baselines. Besides RND and RND+SM, we also include PPO without intrinsic reward as the vanilla Baseline for reference. In addition, we investigate a simple implementation of SM using count-based method to measure surprise novelty. Concretely, we use SimHash algorithm to count the number of surprise c(ut) in a similar manner as (Bellemare et al., 2016) and name the baseline RND+SM (count). The 3https://github.com/jcwleo/random-network-distillation-pytorch 4In this case, the perfect reconstruction is E [X] intrinsic reward is then β/ √ c(ut). We tune the hyperparameter β = {0.5, 1, 5} and the hash matrix size kh = {32, 64, 128, 256} and use the same normalization and training process to run this baseline. We report the learning curves of the best variant with β = 0.5 and kh = 128. The result demonstrates that the proposed SM using memory-augmented neural networks outperforms the count-based SM by a signi cant margin. One possible reason is that count-based method cannot handle white noise: it always returns high intrinsic rewards. In contrast, our SM can somehow reconstruct white noise via pass-through mechanism and thus reduces the impact of fake surprise on learning. Also, the proposed SM is more exible than the count-based counterpart since it learns to reconstruct from the data rather than using a x counting scheme. The result also shows that RND+SM outperforms the vanilla Baseline. Although the improvement is moderate (0.9 vs 0.85), the result is remarkable since the Noisy-TV is designed to fool intrinsic motivation methods and among all, only RND+SM can outperform the vanilla Baseline. D.2 MiniGrid The tasks in this experiment are from the MiniGrid library (Apache License) (ChevalierBoisvert et al., 2018). In MiniGrid environments, the state is a description vector representing partial observation information such as the location of the agents, objects, moving directions, etc. The three tasks use hardest maps: • DoorKey: MiniGrid-DoorKey-16x16-v0 • LavaCrossing: MiniGrid-LavaCrossingS11N5-v0 • DynamicObstacles: MiniGrid-Dynamic-Obstacles-16x16-v0 The SGs used in this experiment are RND (Burda et al., 2018b), ICM (Pathak et al., 2017), NGU (Badia et al., 2019) and AE. Below we describe the input-output structure of these SGs. • RND: It = st and Ot = fR (st) where st is the current state and fR is a neural network that has a similar structure as the prediction network, yet its parameters are initialized randomly and xed during training. • ICM: It = (st−1, at) and Ot = st where s is the embedding of the state and a the action. We note that in addition to the surprise loss (Eq. 2), ICM is trained with inverse dynamics loss. • NGU: This agent reuses the RND as the SG (It = st and Ot = fR (st)) and combines the surprise norm with an KNN episodic reward. When applying our SM to NGU, we only take the surprise-based reward as input to the SM. The code for NGU is based on this public repository https://github.com/opendilab/DI-engine. • AE: It = st and Ot = st where s is the embedding of the state. This SG can be viewed as an associative memory of the observation, aiming to remember the states. This baseline is designed to verify the importance of surprise modeling. 
Despite sharing a similar architecture, it di ers from our SM, which operates on surprise and have an augmented episodic memory to support reconstruction. The backbone RL algorithm is PPO. The code for PPO and RND is the same as in Sec. D.1. We adopt a public code repository for the implementation of ICM (MIT License)5. We implement AE ourselves using a 3-layer feed-forward neural network. For the SGs, we only tune the number of actors (32, 128, 1024), mini-batch size (4, 16, 64) and -clip (0.1, 0.2, 0.3) for the DoorKey task. We also tune the architecture of the AE (number of layers: 1,2 or 3, activation tanh or ReLU) on the DoorKey task. After tuning the SGs, we use the same setting for our SG+SM. The detailed con gurations of the SGs for this experiment are reported in Table 3 and Table 4. The full learning curves of the backbone (Baseline), SG and SG+SM are given in Fig. 7. To visualize the di erence between surprise and surprise residual vectors, we map these in the synthetic trajectory to 2-dimensional space using t-SNE projection in Fig. 8. The surprise points show clustered patterns for high-MNIR states, which con rms our hypothesis that there exist familiar surprises (they are highly surprising due to high norm, yet repeated). In contrast, the surprise residual estimated by the SM has no high-MNIR clusters. The SM transforms clustered surprises to scatter surprise residuals, resulting in a broader range of MNIR, thus showing signi cant discrimination on states that have similar surprise norm. D.3 Atari The Atari 2600 Games task involves training an agent to achieve high game scores. The state is a 2d image representing the screen of the game. 5https://github.com/jcwleo/curiosity-driven-exploration-pytorch Surprise Surprise Residual SG and RL backbone implementations We use 2 SGs: RND and LWM. RND uses a PPO backbone as in previous sections. On the other hand, LWM uses DQN backbone with CNN-based encoder and GRU-based value function. The LWM SG uses GRU to model forward dynamics of the environment and thus its input is: It = (st−1, at, ht−1) where st−1 is the embedding of the previous state, at the current action, and ht−1 the hidden state of the world model GRU. The target Ot is the embedding of the current state st. RND follows the same implementation as in previous experiments. We use the public code of LWM provided by the authors6 to implement LWM. The hyperparameters of RND and LWM are tuned by the repository's owner (see Table 4 for RND and refer to the code or the original paper (Ermolov and Sebe, 2020) for the details of LWM implementation). We augment them with our SM of default hyperparameters N = 128, d = 16. Training and evaluation We follow the standard training for Atari games, such as stacking four frames and enabling sticky actions. All the environments are based on OpenAI's gym-atari's NoFrameskip-v4 variants (MIT Liscence)7 . After training, we evaluate the models by measuring the average return over 128 episodes and report the results in Table. 2. Depending on the setting, the models are trained for 50 or 200 million frames. Results Fig. 9 demonstrates the learning curves of all models in 6 Atari games under the low-sample regime. LWM+SM and RND+SM clearly outperfrom LWM and RND in Frostbite, Venture, Gravitar, Solaris and Frostbite, Venture, Gravitar and MontezumaRevenge, respectively. Table 5 reports the results of more baselines. 
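To illustrate the recurrent (I_t, O_t) structure of the LWM SG described above, a minimal sketch is given below; the encoder, layer sizes and batched interface are illustrative assumptions and do not reproduce the actual LWM architecture.

import torch
import torch.nn as nn

class RecurrentForwardSurprise(nn.Module):
    # Sketch of an LWM-style SG: a GRU world model predicts the embedding of the
    # current state from the previous state embedding, the action and its recurrent
    # state; the surprise u_t is the prediction residual in embedding space.
    def __init__(self, obs_dim, act_dim, emb_dim=64, hid=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim, emb_dim), nn.ReLU())
        self.rnn = nn.GRUCell(emb_dim + act_dim, hid)
        self.head = nn.Linear(hid, emb_dim)

    def surprise(self, s_prev, a_onehot, h_prev, s_curr):
        # shapes: s_prev, s_curr (B, obs_dim); a_onehot (B, act_dim); h_prev (B, hid)
        h = self.rnn(torch.cat([self.phi(s_prev), a_onehot], dim=-1), h_prev)
        u_t = self.head(h) - self.phi(s_curr).detach()     # u_t = SG(I_t) - O_t
        return u_t, h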
D.4 Ablation study Role of Memories We conduct more ablation studies to verify the need for the shortM and long-term (W) memory in our SM. We design additional baselines SM (no W) and SM (no M) (see Sec. 3.4), and compare them with the SM with full features in Montezuma Revenge and Frostbite task. Fig. 10 (a) shows that only SM (full) can reach an average score of more than 5000 after 50 million training frames. Other ablated baselines can only achieve around 2000 scores. 6https://github.com/htdt/lwm 7https://github.com/openai/gym We also shows the impact of the episodic memory in decreasing the intrinsic rewards for similar states as discussed in Sec. 2.3. We select 3 states in the MiniGrid's KeyDoor task and computes the MNIR for each state, visualized in Fig. 11. At the step-1 state, the MNIR is low since there is nothing special in the view of the agent. At the step-15 state, the agent rst sees the key, and get a high MNIR. At the step-28 state, the agent drops the key and sees the key again. This event is still more interesting than the step-1 state. However, the view is similar to the one in step 15, and thus, the MNIR decreases from 0.7 to 0.35 as expected. No Task Reward The tasks in this experiment are from the MiniWorld library (Apache License) (Chevalier-Boisvert, 2018). The two tasks are: • Easy: MiniWorld-PickupObjs-v0 • Hard: MiniWorld-FourRooms-v0 The backbone and SG are the same as in Sec. D.1. We remove the task/external reward in this experiment. For the Baseline, without task reward, it receives no training signal and thus, showing a similar behavior as a random agent. Fig. 12 illustrates the running average of cumulative task return and the intrinsic reward over training steps. In the Easy mode, the random Baseline can even perform better than RND, which indicates that biased intrinsic reward is not always helpful. RND+SM, in both modes, shows superior performance, con rming that its intrinsic reward is better to guide the exploration than that of RND. E Theoretical Property of Surprise Space's Variance Let X be a random variable representing the observation at some timestep, a surprise generator (SG) can at most learn to predict the mean of this variable and compute the surprise U = E [X|Y ]−X where Y is a random factor that a ect the prediction of SG and makes it produce imperfect reconstruction E [X|Y ] instead of E [X]. For instance, in the case of an autoencoder AE as the SG, X and U are stand AE(st)− st, respectively. Let us denote Z = E (X|Y ), then E [Z|Y ] = Z and E [ Z2|Y ] = Z2. We have var (X) = var(X − Z + Z) = var(X − Z) + var(Z) + 2cov(X − Z,Z) = var(X − Z) + var(Z) + 2E[(X − Z)Z]− 2E[X − Z]E[Z] Using the Law of Iterated Expectations, we have E[X − Z] = E[E[X − Z|Y ]] = E[E [X|Y ]− E [Z|Y ]] = E [Z − Z] = 0 and E[(X − Z)Z] = E[E[(X − Z)Z|Y ]] = E[E[XZ − Z2|Y ]] = E[E (XZ|Y )− E ( Z2|Y ) ] = E[ZE (X|Y )− Z2] = E[Z2 − Z2] = 0 Therefore, var (X) = var(X − Z) + var(Z) Let CXii , C X−Z ii and C Z ii denote the diagonal entries of these covariance matrices, they are the variances of the components of the random vector X, X −Z and Z, respectively. That is, ( σXi )2 = ( σX−Zi )2 + ( σZi )2 ⇒ ( σXi )2 ≥ (σX−Zi )2 = (σUi )2 In our setting, X and U represents observation and surprise spaces, respectively. Therefore, the variance of each feature dimension in surprise space is smaller than that of observation space. The equality is obtained when ( σZi )2 = 0 or E (X|Y ) = E (X). That is, the SG's prediction is perfect, which is unlikely to happen in practice. 
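The decomposition var(X) = var(X − Z) + var(Z) can also be checked numerically. The following sketch uses a simple illustrative generative model (an assumption made only for this check) in which the predictor outputs E[X|Y] = Y.

import numpy as np

# Monte-Carlo check of Proposition 1: the per-dimension variance of the surprise
# U = E[X|Y] - X is no larger than that of the observation X.
rng = np.random.default_rng(0)
Y = rng.standard_normal((100_000, 4))        # factor Y affecting the SG's prediction
X = Y + 0.5 * rng.standard_normal(Y.shape)   # observation X
Z = Y                                        # imperfect prediction Z = E[X|Y]
U = Z - X                                    # surprise
print(X.var(axis=0))                         # ~1.25 per dimension
print(U.var(axis=0))                         # ~0.25 per dimension, <= var(X)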
F Limitations Our method assumes that surprises have patterns and can be remembered by our surprise memory. There might exist environments beyond those studied in this paper where this assumption does not hold, or where surprise-based counterparts already achieve near-optimal exploration (e.g., with a perfect SG) and thus do not need SM for improvement (e.g., the Freeway game). In addition, M and W require more physical memory (RAM/GPU) than SG-only methods. Finally, a plug-in module like SM introduces additional hyperparameters, such as N and d. Although we find that the default values of N=128 and d=16 work well across all experiments in this paper, we recommend adjusting them if users apply our method to novel domains.
1. What is the main contribution of the paper regarding curiosity-based exploration methods? 2. What are the strengths and weaknesses of the proposed approach, particularly in its conceptual parts and descriptions? 3. Do you have any concerns about the novelty and reproducibility of the paper's findings? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any additional related works that the paper should cite and compare, especially in the context of surprise minimization?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes modifications to curiosity-based exploration methods to help combat some of the corner cases that they don't deal well with, which also results in increased performance. The motivation for this work is that exploration is still a really large challenge in reinforcement learning, and even some of the best proposed methods have difficulty exploring properly in all types of environments (MDPs). The method proposed in this paper introduces a new metric, surprise novelty, which helps reduce the noisy-TV problem for curiosity-based methods. The paper does introduce some interesting ideas, and there is some evaluation at the end of the paper to show that this surprise-novelty metric is less susceptible to some of the issues with normal curiosity-based methods. Strengths And Weaknesses pros The method in the paper does make some analysis in order to be able to correct and adjust some of the challenges with applying curiosity-based exploration bonuses in different types of environments. An adequate amount of experimental analysis at the end of the paper indicates the potential for such a method to offer a contribution over prior curiosity-based metrics. cons Generally, some of the conceptual parts and the description of the method are challenging to follow. In particular, the description of surprise and surprise novelty, and the mathematical difference between these concepts, is not thoroughly explained. This makes it challenging to understand the paper's novelty and to reuse the findings of the paper in future work by the community. There is additional related work that this paper should cite and compare against. In particular, if the paper is citing the noisy-TV problem and the general aspect that environments can have stochastic elements in them, then the paper should consider comparing to additional methods that use surprise minimization [1,2]. [1] Berseth, G., Geng, D., Devin, C. M., Rhinehart, N., Finn, C., Jayaraman, D., & Levine, S. (2021). SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments. International Conference on Learning Representations. [2] Rhinehart, N., Wang, J., Berseth, G., Co-Reyes, J. D., Hafner, D., Finn, C., & Levine, S. (2021). Intrinsic Control of Variational Beliefs in Dynamic Partially-Observed Visual Environments. Advances in Neural Information Processing Systems, 34. Clarity, Quality, Novelty And Reproducibility The introduction of this paper is not easy to understand. There is some immediate discussion of surprise and surprise novelty, yet neither of these concepts is explained in sufficient detail within the introduction. The introduction then goes on to somewhat discuss the importance of considering these two different aspects and why memory is helpful, but not why memory is necessary. More information is needed to see why the design in Figure 2 is an ideal way to memorize and recover surprise novelty. The paper should also cite related work on memorizing normal trajectories as sequences. For example, it was not clear until the middle of the second page that surprise novelty might be the second-order version of surprise. It's not obvious if Proposition 1 holds; we don't have much information about what X and U should be. The surprise norm appears to be similar to the prediction mechanism from RND.
The RND network learns to estimate the surprise expectation and then compares it to the current observation. More details describing the difference between RND and the prediction-comparison mechanism used in this work are needed to understand the novelty. The discussion of how the episodic memory is used and trained is difficult to follow. Why is there an additional model trained to predict part of the memory, while a buffer of the agent's most recent history is also kept in order to compute the surprise novelty? A diagram further explaining the process and the components necessary to produce the surprise novelty would be very helpful to the reader. The writing in Section 2 also makes it challenging to understand which pieces are the novel contributions of this work. The authors should consider adding a background or prior-methods section; an example of content for this section is the autoencoder training, since training autoencoders is not new and has been around for many years. The comparisons in the paper should include more of the environments used in prior (ICM and RND) papers. Table 2, and the results of the paper in general, need to include confidence information and statistics for this analysis. How many random seeds are used to run this analysis? How much do the training distributions of the different methods overlap? Preferably, a t-test should be performed over the different methods in the paper so we can understand how statistically confident we can be in these results. In addition, the results in Figure 5 do not appear to show converged policies. It is possible that prior methods end up outperforming the method proposed in this paper if more training time is given. This makes the results shown in Figure 5 difficult to use in the assessment. Given that a large motivation for this paper is to deal with undesired or unusual stochastic elements of the environment, the paper should also cite the recent line of work on surprise minimization [1,2], which by design has a well-conceptualized solution to this problem.
ICLR
Title Intrinsic Motivation via Surprise Memory Abstract We present a new computing model for intrinsic rewards in reinforcement learning that addresses the limitations of existing surprise-driven explorations. The reward is the novelty of the surprise rather than the surprise norm. We estimate the surprise novelty as retrieval errors of a memory network wherein the memory stores and reconstructs surprises. Our surprise memory (SM) augments the capability of surprise-based intrinsic motivators, maintaining the agent's interest in exciting exploration while reducing unwanted attraction to unpredictable or noisy observations. Our experiments demonstrate that the SM combined with various surprise predictors exhibits e cient exploring behaviors and signi cantly boosts the nal performance in sparse reward environments, including Noisy-TV, navigation and challenging Atari games. 1 Introduction What motivates agents to explore? Successfully answering this question would enable agents to learn e ciently in formidable tasks. Random explorations such as -greedy are ine cient in high dimensional cases, failing to learn despite training for hundreds of million steps in sparse reward games (Bellemare et al., 2016). Alternative approaches propose to use intrinsic motivation to aid exploration by adding bonuses to the environment's rewards (Bellemare et al., 2016; Stadie et al., 2015). The intrinsic reward is often proportional to the novelty of the visiting state: it is high if the state is novel (e.g. di erent from the past ones (Badia et al., 2020; 2019)) or less frequently visited (Bellemare et al., 2016; Tang et al., 2017). Another view of intrinsic motivation is from surprise, which refers to the result of the experience being unexpected, and is determined by the discrepancy between the expectation (from the gent's prediction) and observed reality (Barto et al., 2013; Schmidhuber, 2010). Technically, surprise is the di erence between prediction and observation representation vectors. The norm of the residual (i.e. prediction error) is used as the intrinsic reward. Here, we will use the terms surprise and surprise norm to refer to the residual vector and its norm, respectively. Recent works have estimated surprise with various predictive models such as dynamics (Stadie et al., 2015), episodic reachability (Savinov et al., 2018) and inverse dynamics (Pathak et al., 2017); and achieved signi cant improvements with surprise norm (Burda et al., 2018a). However, surprise-based agents tend to be overly curious about noisy or unpredictable observations (Itti and Baldi, 2005; Schmidhuber, 1991). For example, consider an agent watching a television screen showing white noise (noisy-TV problem). The TV is boring, yet the agent cannot predict the screen's content and will be attracted to the TV due to its high surprise norm. This distraction or "fake surprise" is common in partially observable Markov Decision Process (POMDP), including navigation tasks and Atari games (Burda et al., 2018b). Many works have addressed this issue by relying on the learning progress (Achiam and Sastry, 2017; Schmidhuber, 1991) or random network distillation (RND) (Burda et al., 2018b). However, the former is computationally expensive, and the latter requires many samples to perform well. This paper overcomes the "fake surprise" issue by using surprise novelty - a new concept that measures the uniqueness of surprise. To identify surprise novelty, the agent needs to compare the current surprise with surprises in past encounters. 
One way to do this is to equip the agent with some kind of associative memory, which we implement as an autoencoder whose task is to reconstruct a query surprise. The lower the reconstruction error, the lower the surprise novelty. A further mechanism is needed to deal with the rapid changes in surprise structure within an episode. As an example, if the agent meets the same surprise at two time steps, its surprise novelty should decline, and with a simple autoencoder this will not happen. To remedy this, we add an episodic memory, which stores intra-episode surprises. Given the current surprise, this memory can retrieve similar surprises presented earlier in the episode through an attention mechanism. These surprises act as a context added to the query to help the autoencoder better recognize whether the query surprise has been encountered in the episode or not. The error between the query and the autoencoder's output is defined as surprise novelty, to which the intrinsic reward is set proportionally. We argue that using surprise novelty as an intrinsic reward is better than using surprise norm. In POMDPs, surprise norms can be very large since the agent cannot predict its environment perfectly, yet there may exist patterns of prediction failure. If the agent can remember these patterns, it will not feel surprised when similar prediction errors appear, regardless of the surprise norms. An important emergent property of this architecture is that when random observations are presented (e.g., white noise in the noisy-TV problem), the autoencoder can act as an identity transformation operator, effectively passing the noise through to reconstruct it with low error. We conjecture that the autoencoder is able to do this with the surprise rather than the observation because the surprise space has lower variance, and we show this in our paper. To make our memory system work on the surprise level, we adopt an intrinsic motivation method to generate surprises for the memory. The surprise generator (SG) can be of any kind based on predictive models and is jointly trained with the memory to optimize its own loss function. To train the surprise memory (SM), we optimize the memory's parameters to minimize the reconstruction error. Our contribution is to propose a new concept of surprise novelty for intrinsic motivation. We argue that it reflects the originality of the environment better than surprise norm (see motivating graphics in Fig. 1). In our experiments, the SM helps RND (Burda et al., 2018b) perform well in our challenging noisy-TV problem while RND alone performs poorly. Not only with RND: we consistently demonstrate significant performance gains when coupling three different SGs with our SM in sparse-reward tasks. Finally, in hard-exploration Atari games, we boost the scores of 2 strong SGs, resulting in better performance under the low-sample regime. 2 Methods 2.1 Surprise Novelty Surprise is the difference between expectation and observation (Ekman and Davidson, 1994). If a surprise repeats, it is no longer a surprise. Based on this intuition, we hypothesize that surprises can be characterized by their novelties, and an agent's curiosity is driven by the surprise novelty rather than the surprise magnitude. Moreover, surprise novelty should be robust against noise: it is small even for random observations. For example, watching a random-channel TV can always be full of surprises, as we cannot expect which channel will appear next.
However, the agent should soon find it boring since the surprise of random noise reoccurs repeatedly, and the channels are entirely unpredictable. We propose using a memory-augmented neural network (MANN) to measure surprise novelty. The memory remembers past surprise patterns, and if a surprise can be retrieved from the memory, it is not novel, and the intrinsic motivation should be small. The memory can also be viewed as a reconstruction network. The network can pass its inputs through for random, pattern-free surprises, making them retrievable. Surprise novelty has an interesting property: if some event is unsurprising (the expectation-reality residual is the zero vector), its surprise (the zero vector, with norm 0) is always perfectly retrievable (surprise novelty is 0). In other words, a low surprise norm means low surprise novelty. On the contrary, a high surprise norm can have little surprise novelty as long as the surprise can be retrieved from the memory, either through associative recall or the pass-through mechanism. Another property is that the variance of surprise is generally lower than that of the observation (state), potentially making learning on the surprise space easier. This property is formally stated as follows. Proposition 1. Let X and U be random variables representing the observation and surprise at the same timestep, respectively. Under an imperfect SG, the following inequality holds: ∀i: (σ^X_i)^2 ≥ (σ^U_i)^2, where (σ^X_i)^2 and (σ^U_i)^2 denote the i-th diagonal elements of var(X) and var(U), respectively. Proof. See Appendix E. 2.2 Surprise Generator Since our MANN requires surprises for its operation, it is built upon a prediction model, which will be referred to as the Surprise Generator (SG). In this paper, we adopt several well-known SGs (e.g. RND (Burda et al., 2018b) and ICM (Pathak et al., 2017)) to predict the observation and compute the surprise u_t and its norm for every step in the environment. The surprise norm is the Euclidean distance between the expectation and the actual observation: ‖u_t‖ = ‖SG(I_t) − O_t‖ (1), where u_t ∈ R^n is the surprise vector of size n, I_t the input of the SG at step t of the episode, and SG(I_t) and O_t the SG's prediction and the observation target, respectively. The input I_t is specific to the SG architecture choice, which can be the current state (s_t) or the previous state and action (s_{t−1}, a_t). The observation target O_t is usually a transformation (identical or random) of the current state s_t, which serves as the target for the SG's prediction. The SG is usually trained to minimize: L_SG = E_t[‖u_t‖] (2). Here, predictable observations have minor prediction errors, i.e., little surprise. One issue is that a great surprise norm can be simply due to noisy or distracting observations. Next, we propose a remedy for this problem. 2.3 Surprise Memory The surprise generated by the SG is stored and processed by a memory network dubbed Surprise Memory (SM). It consists of an episodic memory M and an autoencoder network W, jointly optimized to reconstruct any surprise. At each timestep, the SM receives a surprise u_t from the SG module and reads content u^e_t from the memory M. {u^e_t, u_t} forms a surprise query q_t to W to retrieve the reconstructed q̃_t. This reconstruction is used to estimate the novelty of surprises, forming intrinsic rewards r^i_t. Fig. 2 summarizes the operations of the components of our proposed method. Our two-memory design effectively recovers surprise novelty by handling intra- and inter-episode surprise patterns thanks to M and W, respectively.
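To make Eqs. 1 and 2 concrete, here is a minimal PyTorch sketch of an RND-style surprise generator. The layer sizes, the feature dimension n, and the MLP encoders are illustrative assumptions on our part, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RNDSurpriseGenerator(nn.Module):
    """Minimal RND-style SG: surprise u_t = predictor(s_t) - target(s_t) (Eq. 1)."""
    def __init__(self, obs_dim, feat_dim=512):  # feat_dim plays the role of n (assumed)
        super().__init__()
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                       nn.Linear(256, feat_dim))
        self.target = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                    nn.Linear(256, feat_dim))
        for p in self.target.parameters():   # random target network, fixed during training
            p.requires_grad = False

    def forward(self, s_t):
        o_t = self.target(s_t).detach()       # observation target O_t
        u_t = self.predictor(s_t) - o_t       # surprise vector u_t
        loss_sg = u_t.norm(dim=-1).mean()     # L_SG = E_t[||u_t||] (Eq. 2)
        return u_t, loss_sg
```

Other SGs (ICM, LWM, a plain autoencoder) differ only in what I_t and O_t are; the surprise vector and its norm are computed the same way.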
M can quickly adapt and recall surprises that occur within an episode. W is slower and focuses more on consistent surprise patterns across episodes during training. Here the query q_t can be directly set to the surprise u_t. However, this ignores the rapid change in surprise within an episode. Without M, when the SG and W are fixed (during interaction with environments), their outputs u_t and q̃_t stay the same for the same input I_t. Hence, the intrinsic reward r^i_t also stays the same. This is undesirable: when the agent observes the same input at different timesteps (e.g., I_1 = I_2), we expect its curiosity to decrease on the second visit (r^i_2 < r^i_1). Therefore, we design the SM with M to fix this issue. The episodic memory M stores representations of surprises that the agent encounters during an episode. For simplicity, M is implemented as a first-in-first-out queue whose size is fixed at N. Notably, the content of M is wiped out at the end of each episode, so its information is limited to a single episode. M can be viewed as a matrix M ∈ R^{N×d}, where d is the size of a memory slot. We denote M(j) as the j-th row in the memory, corresponding to the surprise u_{t−j}. To retrieve from M a read-out u^e_t that is close to u_t, we perform content-based attention (Graves et al., 2014) to compute the attention weight as w_t(j) = (u_t Q) M(j)^T / (‖u_t Q‖ ‖M(j)‖). The read-out from M is then u^e_t = w_t M V ∈ R^n. Here, Q ∈ R^{n×d} and V ∈ R^{d×n} are learnable weights mapping between the surprise and the memory space. To force the read-out to be close to u_t, we minimize: L_M = E_t[‖u^e_t − u_t‖] (3). The read-out and the SG's surprise form the query surprise to W: q_t = [u^e_t, u_t] ∈ R^{2n}. M stores intra-episode surprises to assist the autoencoder in preventing the agent from exploring fake surprises within the episode. Since we train the parameters to reconstruct u_t using past surprises in the episode, if the agent visits a state whose surprise is predictable from those in M, ‖u^e_t − u_t‖ should be small. Hence, the read-out context u^e_t contains no extra information beyond u_t, and reconstructing q_t with W becomes easier, as it is equivalent to reconstructing u_t. In contrast, visiting diverse states leads to a more novel read-out u^e_t and makes it more challenging to reconstruct q_t, generally leading to a higher intrinsic reward. The autoencoder network W can be viewed as an associative memory of surprises that persists across episodes. At timestep t in any episode during training, W is queried with q_t to produce a reconstructed memory q̃_t. The surprise novelty is then determined as: r^i_t = ‖q̃_t − q_t‖ (4), which is the norm of the surprise residual q̃_t − q_t. It is normalized and added to the external reward as an intrinsic reward bonus. The details of computing and using normalized intrinsic rewards can be found in Appendix C. We implement W as a feed-forward neural network that learns to reconstruct its own inputs. This kind of autoencoder has been shown to be equivalent to an associative memory that supports memory encoding and retrieval through attractor dynamics (Radhakrishnan et al., 2020). The query surprise is encoded into the weights of the network via backpropagation as we minimize the reconstruction loss below: L_W = E_t[r^i_t] = E_t[‖W(q_t) − q_t‖] (5). Here, q̃_t = W(q_t). Intuitively, it is easier to retrieve non-novel surprises experienced many times in past episodes. Thus, the intrinsic reward is lower for states that lead to these familiar surprises. A minimal code sketch of this read-and-reconstruct step is given below.
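The following PyTorch sketch illustrates one SM step for a single (unbatched) surprise vector, following Eqs. 3, 4 and 5. The dimensions (n = 512, d = 16, N = 128, hidden size 32) match the values reported in Appendix B, but the initialization, the FIFO write policy shown here, and the absence of batching are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurpriseMemory(nn.Module):
    """Sketch of one SM step: attention read-out from M, query to W, intrinsic reward."""
    def __init__(self, n=512, d=16, N=128, hidden=32):
        super().__init__()
        self.Q = nn.Parameter(torch.randn(n, d) * 0.01)   # surprise -> memory space
        self.V = nn.Parameter(torch.randn(d, n) * 0.01)   # memory space -> surprise
        self.W = nn.Sequential(nn.Linear(2 * n, hidden), nn.Tanh(),
                               nn.Linear(hidden, 2 * n))  # autoencoder W (2n -> 32 -> 2n)
        self.register_buffer("M", torch.zeros(N, d))      # episodic FIFO memory, wiped per episode

    def forward(self, u_t):
        key = u_t @ self.Q                                          # project surprise to memory space
        w = F.cosine_similarity(key.unsqueeze(0), self.M, dim=-1)   # attention weights w_t(j)
        u_e = w @ self.M @ self.V                                   # read-out u^e_t
        q_t = torch.cat([u_e, u_t], dim=-1)                         # query q_t = [u^e_t, u_t]
        q_tilde = self.W(q_t)                                       # reconstruction q~_t
        r_i = (q_tilde - q_t.detach()).norm()                       # intrinsic reward / L_W (Eqs. 4-5)
        loss_m = (u_e - u_t.detach()).norm()                        # L_M (Eq. 3)
        # FIFO write: push the projected surprise, drop the oldest slot
        self.M = torch.cat([key.detach().unsqueeze(0), self.M[:-1]], dim=0)
        return r_i, loss_m
```

Note that gradients from r_i stop at q_t.detach(), mirroring the paper's choice of blocking gradients from L_W into the SG to avoid trivial reconstructions.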
On the contrary, rare surprises are harder to retrieve, which results in high reconstruction errors and intrinsic rewards. W acts like a long-term, inter-episode associative memory. Unlike slot-based memories, it has a fixed memory capacity, can compress information and learn data representations. We could store the surprises in a slot-based memory across episodes, but the size of this memory would grow enormous, and the data would be stored redundantly. Hence, the quality of the stored surprises would degrade as more and more observations come in. Readers can refer to Appendix A for the architecture details and for how W can be interpreted as implementing associative memory. The whole system SG+SM is trained end-to-end by minimizing the following loss: L = L_SG + L_M + L_W. Here, we block the gradients from L_W from backpropagating into the parameters of the SG to avoid trivial reconstructions of q_t. Pseudocode of our algorithm is presented in Appendix B. 3 Experimental Results 3.1 Noisy-TV: Robustness against Noisy Observations We use Noisy-TV, an environment designed to fool exploration methods (Burda et al., 2018b; Savinov et al., 2018), to confirm that our method can generate intrinsic rewards that (1) are more robust to noise and (2) can discriminate rare and common observations through surprise novelty. We simulate this problem by employing a 3D maze environment with a random map structure. The TV is not fixed at specific locations in the maze, to make the problem more challenging. Instead, the agent carries the TV with it and can choose to watch TV at any time. Hence, there are three basic actions (turn left, turn right, and move forward) plus one more action: watch TV. When taking this action, the agent sees a white-noise image sampled from a standard normal distribution; thus, the number of TV channels can be considered infinite. The agent's state is an image of its viewport, and its goal is to search for a red box randomly placed in the maze (+1 reward if the agent reaches the goal). The baseline is RND (Burda et al., 2018b), a simple yet strong SG that is claimed to obviate the stochasticity problems of Noisy-TV. Our SG+SM model uses RND as the SG, so we name it RND+SM. Since our model and the baseline share the same RND architecture, the difference in performance must be attributed to our SM. Fig. 3 (a) illustrates the mean-normalized intrinsic rewards (MNIR; see Appendix C for more information on this metric) measured at different states in our Noisy-TV environment. The first two states are noise, the following three states are common walls, and the last two are ones where the agent sees the box. The MNIR bars show that both models are attracted mainly by the noisy TV, resulting in the highest MNIRs. However, our model with SM suffers less from noisy-TV distractions since its MNIR is lower than RND's. We speculate that the SM is able to partially reconstruct the white-noise surprise via the pass-through mechanism, making the normalized surprise novelty generally smaller than the normalized surprise norm in this case. That mechanism is enhanced in SM by surprise reconstruction (see Appendix D.1 for an explanation). On the other hand, when observing the red box, RND+SM shows a higher MNIR than RND. The difference between the MNIR for common and rare states is also more prominent in RND+SM than in RND because RND's prediction is not perfect even for common observations, creating relatively significant surprise norms for seeing walls.
The SM fixes that issue by remembering surprise patterns and successfully retrieving them, producing much smaller surprise novelty compared to that of rare events such as seeing the red box. Consequently, the agent with SM outperforms the other by a massive margin in task rewards (Fig. 3 (b)). As we visualize the number of watch-TV actions and the value of the intrinsic reward produced by RND+SM and RND over training time, we find that RND+SM helps the agent take fewer watch-TV actions and thus collect smaller amounts of intrinsic reward compared to RND. We also verify that our proposed method outperforms a simplified version of SM that uses counts to measure surprise novelty, and a vanilla baseline that does not use intrinsic motivation. The details of these results are given in Appendix D.1. 3.2 MiniGrid: Compatibility with Different Surprise Generators We show the versatility of our framework SG+SM by applying SM to 4 SG backbones: RND (Burda et al., 2018b), ICM (Pathak et al., 2017), NGU (Badia et al., 2019) and an autoencoder (AE) (see Appendix D.2 for implementation details). We test the models on three tasks from the MiniGrid environments: Key-Door (KD), Dynamic-Obstacles (DO) and Lava-Crossing (LC) (Chevalier-Boisvert et al., 2018). If the agent reaches the goal in these tasks, it receives a +1 reward. Otherwise, it can be punished with negative rewards if it collides with obstacles or takes too much time to finish the task. These environments are not stochastic like the Noisy-TV, but they still contain other types of distraction. For example, in KD, the agent can be attracted to irrelevant actions such as going around to drop and pick up the key. In DO, instead of going to the destination, the agent may chase obstacle balls flying around the map. In LC, the agent can commit unsafe actions such as going near lava areas, which differ from typical paths. In any case, due to reward sparsity, intrinsic motivation is beneficial. However, surprise alone may not be enough to guide efficient exploration, since the observation can be too complicated for the SG to minimize its prediction error. Thus, the agent quickly feels surprised, even in unimportant states. Table 1 shows the average returns of the models on the three tasks. The Baseline is the PPO backbone trained without intrinsic reward. RND, ICM, NGU and AE are SGs providing the PPO with surprise-norm rewards, while our method SG+SM uses surprise-novelty rewards. The results demonstrate that models with SM often outperform their SG counterparts significantly and always contain the best performers. Notably, in the LC task, SGs hinder the performance of the Baseline because the agents are attracted to dangerous vivid states, which are hard to predict but cause the agent's death. The SM models avoid this issue and outperform the Baseline in the case of ICM+SM. Compared to AE, which computes the intrinsic reward based on the novelty of the state, AE+SM shows a much higher average score in all tasks. That manifests the importance of modeling the novelty of surprises instead of states. To analyze the difference between the SG+SM's and SG's MNIR structure, we visualize the MNIR for each cell in the map of Key-Door in the Appendix's Figs. 5 (b) and (c). We create a synthetic trajectory that scans through all the cells in the big room on the left and, at each cell, use the RND+SM and RND models to compute the corresponding surprise-novelty and surprise-norm MNIRs, respectively. As shown in Fig. 5 (b), RND+SM selectively identifies truly surprising events, where only a few cells have high surprise-novelty MNIR.
Here, we can visually detect three important events that receive the most MNIR: seeing the key (bottom row), seeing the door side (in the middle of the rightmost column) and approaching the front of the door (the second and fourth rows). Other, less important cells are assigned very low MNIR. On the contrary, RND often gives high surprise-norm MNIR to cells around the important ones, which creates a noisy MNIR map as in Fig. 5 (c). As a result, RND's performance is better than the Baseline, yet far from that of RND+SM. Another analysis of how surprise novelty discriminates between surprises with similar norms is given in the Appendix's Fig. 8. 3.3 Atari: Sample-efficient Benchmark We adopt the sample-efficiency Atari benchmark (Kim et al., 2019) on six hard-exploration games where the training budget is only 50 million frames. We use our SM to augment 2 SGs: RND (Burda et al., 2018b) and LWM (Ermolov and Sebe, 2020). Unlike RND, LWM uses a recurrent world model and forward dynamics to generate surprises. Details of the SGs, training and evaluation are in Appendix D.3. We run the SG and SG+SM in the same codebase and setting. Table 2 reports our results and representative results from prior works, showing that SM-augmented models outperform their SG counterparts in all games (same codebase). In Frostbite and Montezuma Revenge, RND+SM's score is almost twice that of RND. For LWM+SM, games such as Gravitar and Venture see more than 40% improvement. Overall, LWM+SM and RND+SM achieve the best mean and median human-normalized scores, improving 16% and 22% w.r.t. the best SGs, respectively. Notably, RND+SM shows significant improvement for the notorious Montezuma Revenge. We also verify the benefit of the SM in the long run for Montezuma Revenge and Frostbite. As shown in Fig. 4 (a,b), RND+SM still significantly outperforms RND after 200 million training frames, achieving average scores of 10,000 and 9,000, respectively. This result demonstrates the scalability of our proposed method. When using RND and RND+SM to compute the average MNIR in several rooms in Montezuma Revenge (Fig. 1), we find that SM makes the MNIR higher for surprising events in rooms with complex structures while depressing the MNIR of fake surprises in dark rooms. Here, even in the dark room, the movement of agents (human or spider) is hard to predict, leading to a high average MNIR under surprise norm. On the contrary, the average MNIR under surprise novelty is reduced if the prediction error can be recalled from the memory. Finally, measuring the running time of the models, we notice little computing overhead caused by our SM. On our Nvidia A100 GPUs, LWM's and LWM+SM's average times for one 50M-frame training run are 11h 38m and 12h 10m, respectively. For one 200M-frame training run, RND's and RND+SM's average times are 26h 24m and 28h 1m, respectively. These correspond to only 7% more training time, while the performance gap is significant (4,000 score points). 3.4 Ablation Study Role of Memories Here, we use MiniGrid's Dynamic-Obstacles task to study the roles of M and W in the SM (built upon RND as the SG). Disabling W, we directly use ‖q_t‖ = ‖[u^e_t, u_t]‖ as the intrinsic reward, and name this version SM (no W). To ablate the effect of M, we remove u^e_t from q_t and only use q_t = u_t as the query to W, forming the version SM (no M). We also consider different episodic memory capacities and slot sizes N-d = {32-4, 128-16, 1024-64}. As N and d increase, the short-term context expands and more past surprise information is considered in the attention.
In theory, a big M helps capture a longer-term and more accurate context for constructing the surprise query. Fig. 4 (c) depicts the performance curves of the methods after 10 million training steps. SM (no W) and SM (no M) show weak signs of learning, confirming the necessity of both modules in this task. Increasing N-d from 32-4 to 1024-64 improves the final performance. However, 1024-64 is not significantly better than 128-16, perhaps because it is unlikely to have similar surprises that are more than 128 steps apart. Thus, a larger attention span does not provide a benefit. As a result, we keep using N = 128 and d = 16 in all other experiments for faster computation. We also verify the necessity of M and W in Montezuma Revenge and illustrate how M generates lower MNIR when 2 similar events occur in the same episode in Key-Door (see Appendix D.4). No Task Reward In this experiment, we remove task rewards and merely evaluate the agent's ability to explore using intrinsic rewards. The task is to navigate 3D rooms and get a +1 reward for picking up an object (Chevalier-Boisvert, 2018). The state is the agent's image view, and there is no noise. Without task rewards, it is crucial to maintain the agent's interest in the unique events of seeing the objects. In this partially observable environment, surprise-prediction methods may struggle to explore even without noise because they lack the information needed for good predictions, leading to persistently high prediction errors. For this testbed, we evaluate a random exploration agent (Baseline), RND and RND+SM in 2 settings: 1 room with three objects (easy), and 4 rooms with one object (hard). To see the difference among the models, we compare the cumulative task rewards over 100 million steps (see Appendix D.4 for details). RND is even worse than the Baseline in the easy setting because prediction causes high biases (intrinsic rewards) towards the unpredictable, hindering exploration if the map is simple. In contrast, RND+SM uses surprise novelty, generally producing smaller intrinsic rewards (see Appendix Fig. 12 (right)). Consequently, our method consistently demonstrates significant improvements over the other baselines (see Fig. 4 (d) for the hard setting). 4 Related works Intrinsic motivation approaches usually give the agent reward bonuses for visiting novel states to encourage exploration. The bonus is proportional to the mismatch between the prediction and reality, also known as surprise (Schmidhuber, 2010). One kind of predictive model is the dynamics model, wherein the surprise is the error of the model when predicting the next state given the current state and action (Achiam and Sastry, 2017; Stadie et al., 2015). One critical problem of these approaches is the unwanted bias towards transitions where the prediction target is a stochastic function of the inputs, commonly found in partially observable environments. Recent works focus on improving the features of the predictor's input by adopting representation learning mechanisms such as inverse dynamics (Pathak et al., 2017), variational autoencoders, random/pixel features (Burda et al., 2018a), or whitening transforms (Ermolov and Sebe, 2020). Although better representations may improve the reward bonus, they cannot completely solve the problem of stochastic dynamics and thus fail in extreme cases such as the noisy-TV problem (Burda et al., 2018b).
Besides dynamics prediction, several works propose to predict other quantities as functions of the current state by using an autoencoder (Nylend, 2017), episodic memory (Savinov et al., 2018), or a random network (Burda et al., 2018b). Burda et al. (2018b) claimed that using a deterministic random target network is beneficial in overcoming stochasticity issues. Other methods combine this idea with episodic memory and other techniques, achieving good results in large-scale experiments (Badia et al., 2020; 2019). From an information-theoretic perspective, the notion of surprise can be linked to information gain or uncertainty, and predictive models can be treated as parameterized distributions (Achiam and Sastry, 2017; Houthooft et al., 2016; Still and Precup, 2012). Furthermore, to keep the agent away from unpredictable observations, the reward bonus can be measured by the progress of the model's prediction (Achiam and Sastry, 2017; Lopes et al., 2012; Schmidhuber, 1991). However, these methods are complicated and hard to scale, requiring heavy computation. A different angle for handling stochastic observations during exploration is surprise minimization (Berseth et al., 2020; Rhinehart et al., 2021). In this direction, the agents get bigger rewards for seeing more familiar states. Such a strategy is somewhat opposite to our approach and suitable for unstable environments where the randomness occurs separately from the agents' actions. These earlier works rely on the principle of using surprise as an incentive for exploration and differ from our principle, which utilizes surprise novelty. Also, our work augments these existing works with a surprise memory module and can be used as a generic plug-in improvement for surprise-based models. We note that our memory formulation differs from memory-based novelty concepts using episodic memory (Badia et al., 2019), momentum memory (Fang et al., 2022), or counting (Bellemare et al., 2016; Tang et al., 2017), because our memory operates on the surprise level, not the state level. In our work, exploration is discouraged not only in frequently visited states but also in states whose surprises can be reconstructed by the SM. Our work provides a more general and learnable novelty detection mechanism, which is more flexible than nearest-neighbour search or a counting lookup table. 5 Discussion This paper presents the Surprise Generator-Surprise Memory (SG+SM) framework to compute surprise novelty as an intrinsic motivation for the reinforcement learning agent. Exploring with surprise novelty is beneficial when there are repeated patterns of surprises or random observations. For example, in the Noisy-TV problem, our SG+SM can rein in the agent's tendency to visit noisy states such as watching random TV channels while encouraging it to explore rare events with distinctive surprises. We empirically show that our SM can supplement three surprise-based SGs to achieve more rewards in fewer training steps in three grid-world environments. In 3D navigation without external reward, our method significantly outperforms the baselines. On two strong SGs, our SM also achieves superior results in hard-exploration Atari games within 50 million training frames. Even in the long run, our method maintains a clear performance gap from the baselines, as shown in Montezuma Revenge and Frostbite.
If we view surprise as the first-order error between the observation and the prediction, then surprise novelty, the retrieval error between the surprise and the reconstructed memory, is essentially a second-order error. It would be interesting to investigate the notion of higher-order errors, study their theoretical properties, and utilize them for intrinsic motivation in future work. A W as Associative Memory This section connects the associative memory concept to neural networks trained with the reconstruction loss as in Eq. 5. We show how the neural network (W) stores and retrieves its data. We use a 1-layer feed-forward neural network W to simplify the analysis, but the idea extends to multi-layer feed-forward neural networks. For simplicity, assuming W is a square matrix, the objective is to minimize the difference between the input and the output of W:

L = ‖Wx − x‖²  (6)

Using gradient descent, we update W as follows:

W ← W − α ∂L/∂W = W − 2α(Wx − x)xᵀ = W − 2αWxxᵀ + 2αxxᵀ = W(I − 2αxxᵀ) + 2αxxᵀ

where I is the identity matrix and x is a column vector. If a batch of inputs {x_i}_{i=1}^B is used in computing the loss in Eq. 6, at step t we update W as W_t = W_{t−1}(I − αX_t) + αX_t, where X_t = 2 Σ_{i=1}^B x_i x_iᵀ. Starting from t = 0, after T updates the weight becomes

W_T = W_0 Π_{t=1}^T (I − αX_t) − α² Σ_{t=2}^T X_t X_{t−1} Π_{k=t+1}^T (I − αX_k) + α Σ_{t=1}^T X_t  (7)

Given its form, X_t is symmetric positive-definite. Also, as α is often very small (0 < α ≪ 1), we can show that ‖I − αX_t‖ ≤ 1 − λ_min(αX_t) < 1. This means that as T → ∞, ‖W_0 Π_{t=1}^T (I − αX_t)‖ → 0, and thus W_T → −α² Σ_{t=2}^T X_t X_{t−1} Π_{k=t+1}^T (I − αX_k) + α Σ_{t=1}^T X_t, independent of the initialization W_0. Eq. 7 shows how the data (X_t) is integrated into the neural network weight W_t. The remaining component, −α² Σ_{t=2}^T X_t X_{t−1} Π_{k=t+1}^T (I − αX_k), can be viewed as additional encoding noise. Without this component (by assuming α is small enough),

W_T ≈ α Σ_{t=1}^T X_t = 2α Σ_{t=1}^T Σ_{i=1}^B x_{i,t} x_{i,t}ᵀ

or equivalently, we have the Hebbian update rule W ← W + x_{i,t} ⊗ x_{i,t}, where W can be seen as the memory, ⊗ is the outer product and x_{i,t} is the data or item stored in the memory. This memory update is the same as that of classical associative memory models such as the Hopfield network and Correlation Matrix Memory (CMM). Given a query q, we retrieve the value in W as the output of the neural network:

q′ = qᵀW = qᵀR + α Σ_{t=1}^T qᵀX_t = qᵀR + 2α Σ_{t=1}^T Σ_{i=1}^B (qᵀx_{i,t}) x_{i,t}ᵀ

where R = W_0 Π_{t=1}^T (I − αX_t) − α² Σ_{t=2}^T X_t X_{t−1} Π_{k=t+1}^T (I − αX_k). If q was presented to the memory W in the past as some x_j, q′ can be written as:

q′ = qᵀR + 2α Σ_{t=1}^T Σ_{i=1, i≠j}^B (qᵀx_{i,t}) x_{i,t}ᵀ + 2α‖q‖² qᵀ

where the first term is noise and the second term is cross-talk. Assuming that the noise is insignificant thanks to a small α, we can retrieve q exactly, given that all items in the memory are orthogonal (by a certain transformation, this condition can be reduced to linear independence). As a result, after scaling q′ by 1/(2α‖q‖²), the retrieval error ‖q′/(2α‖q‖²) − q‖ is 0. If q is new to W, the error will depend on whether the items stored in W are close to q. Usually, the higher the error, the more novel q is w.r.t. W.
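As a sanity check of the derivation above, the following small NumPy sketch (our own illustration, not part of the paper's code) stores a few orthogonal items with the Hebbian rule W ← W + x xᵀ and compares the retrieval error for a stored versus an unseen query.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_items = 16, 4

# Orthonormal items to store (rows of a random orthonormal basis)
basis, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
items = basis[:num_items]

# Hebbian storage: W <- W + x x^T for every stored item
W = np.zeros((dim, dim))
for x in items:
    W += np.outer(x, x)

def retrieval_error(q, W):
    """Retrieve q' = q^T W, rescale by ||q||^2, and measure ||q'/||q||^2 - q||."""
    q_prime = q @ W
    return np.linalg.norm(q_prime / (q @ q) - q)

stored_q = items[0]                  # previously stored item -> error close to 0
novel_q = basis[num_items + 1]       # orthogonal to all stored items -> large error
print(retrieval_error(stored_q, W))  # ~0.0
print(retrieval_error(novel_q, W))   # ~1.0 (the memory returns a near-zero vector)
```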
B SM's Implementation Detail In practice, the short-term memory M is a tensor of shape [B, N, d], where B is the number of actors, N the memory length and d the slot size. B is an SG hyperparameter, tuned per task based on SG performance. For example, for the Noisy-TV, we tune RND as the SG, obtaining B = 64, and use the same value for M. N and d are the hyperparameters specific to our method. As mentioned in Sec. 3.4, we fix N = 128 and d = 16 in all experiments. As B increases in large-scale experiments, memory storage for M can become demanding. To overcome this issue, we can use the uniform writing trick to optimally preserve information while reducing N (Le et al., 2019). Also, for W, by using a small hidden size, we can reduce the requirement for physical memory significantly. Practically, in all experiments, we implement W as a 2-layer feed-forward neural network with a hidden size of 32 (2n → 32 → 2n). The activation is tanh. With n = 512 and d = 16, the number of parameters of W is only about 65K. Also, Q ∈ R^{n×d} and V ∈ R^{d×n} have about 8K parameters. In total, our SM introduces fewer than 90K trainable parameters, which is marginal compared to the SG and policy/value networks (up to 10 million parameters). The joint training of SG+SM is presented in Algo. 2. We note that vector notations in the algorithms are row vectors. For simplicity, the algorithms assume 1 actor; in practice, our algorithm works with multiple actors and mini-batch training.

Algorithm 1 Intrinsic reward computation via the SG+SM framework.
Require: u_t, and our surprise memory SM consisting of a slot-based memory M, parameters Q, V, and a neural network W
1: Compute L_SG = ‖u_t‖
2: Query M with u_t, retrieve u^e_t = w_t M V, where w_t is the attention weight
3: Compute L_M = ‖u^e_t − u_t.detach()‖
4: Query W with q_t = [u^e_t, u_t], retrieve q̃_t = W(q_t)
5: Compute intrinsic reward r^i_t = L_W = ‖q̃_t − q_t.detach()‖
6: return L_SG, L_M, L_W

Algorithm 2 Jointly training SG+SM and the policy.
Require: buffer, policy π_θ, surprise-based predictor SG, and our surprise memory SM consisting of a slot-based memory M, parameters Q, V, and a neural network W
1: Initialize π_θ, SG, Q, W
2: for iteration = 1, 2, ... do
3:   for t = 1, 2, ..., T do
4:     Execute policy π_θ to collect s_t, a_t, r_t, forming input I_t = s_t, ... and target O_t
5:     Compute surprise u_t = SG(I_t) − O_t.detach() (Eq. 1)
6:     Compute intrinsic reward r^i_t using Algo. 1
7:     Compute final reward r_t ← r_t + β r^i_t / r^std_t
8:     Add (I_t, O_t, s_{t−1}, s_t, a_t, r_t) to buffer
9:     Add u_t Q to M
10:    if episode done then clear M
11:  end for
12:  for k = 1, 2, ..., K do
13:    Sample I_t, O_t from buffer
14:    Compute surprise u_t = SG(I_t) − O_t.detach() (Eq. 1)
15:    Compute L_SG, L_M, L_W using Algo. 1
16:    Update SG, Q and W by minimizing the loss L = L_SG + L_M + L_W
17:    Update π_θ with samples (s_{t−1}, s_t, a_t, r_t) from the buffer using the backbone algorithm
18:  end for
19: end for

C Intrinsic Reward Normalization Following Burda et al. (2018b), to keep the intrinsic reward on a consistent scale, we normalize the intrinsic reward by dividing it by a running estimate of the standard deviation of the intrinsic returns. This normalized intrinsic reward (NIR) is used for training. In addition, there is a hyperparameter, the intrinsic reward coefficient, that scales the intrinsic contribution relative to the external reward. We denote the running standard deviation and the intrinsic reward coefficient as r^std_t and β, respectively, in Algo. 2. In our experiments, unless otherwise stated, β = 1.
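For concreteness, a minimal sketch of this reward shaping (Algo. 2, line 7) is given below. The running-statistics implementation (Welford's method) and the way the intrinsic-return estimate is fed in are our own simplifying assumptions; only the formula r_t ← r_t + β r^i_t / r^std_t is taken from the paper.

```python
import numpy as np

class RunningStd:
    """Running estimate of the standard deviation of intrinsic returns (Welford's method)."""
    def __init__(self, eps=1e-8):
        self.count, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

    def update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    @property
    def std(self):
        return np.sqrt(self.m2 / max(self.count, 1)) + self.eps

running = RunningStd()
beta = 1.0  # intrinsic reward coefficient

def shaped_reward(ext_reward, intrinsic_return_estimate, r_i):
    """r_t <- r_t + beta * r^i_t / r^std_t (Algo. 2, line 7)."""
    running.update(intrinsic_return_estimate)
    return ext_reward + beta * r_i / running.std
```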
We note that when comparing the intrinsic reward at different states within the same episode (as in the experiment section), we normalize intrinsic rewards by subtracting the mean, followed by a division by the standard deviation of all intrinsic rewards in the episode. Hence, the mean-normalized intrinsic reward (MNIR) in these experiments is different from the one used in training and can be negative. We argue that normalizing with the mean and std of the episode's intrinsic rewards is necessary to make the comparison reasonable. For example, in an episode, method A assigns all steps an intrinsic reward of 200, and method B assigns novel steps an intrinsic reward of 1 while all others get 0. Clearly, method A treats all steps in the episode equally, and thus it is equivalent to giving no motivation to any of the steps in the episode (the learned policy will not motivate the agent to visit novel states). On the contrary, method B triggers motivation for the novel steps in the episode (the learned policy will encourage visits to novel states). Without normalizing by mean subtraction, it is tempting to conclude that the relative intrinsic reward of method A for a novel step is higher, which is technically incorrect. D Experimental Details D.1 Noisy-TV We create the Noisy-TV environment by modifying the Maze environment (MazeS3Fast-v0) in the MiniWorld library (Apache License) (Chevalier-Boisvert, 2018). The backbone RL algorithm is PPO. We adopt a public code repository for the implementation of PPO and RND (MIT License, https://github.com/jcwleo/random-network-distillation-pytorch). In this environment, the state is an image of the agent's viewport. The details of the architecture and hyperparameters of the backbone and RND are presented in Table 4. Most settings are the same as in the repository. We only tune the number of actors (32, 128, 1024), mini-batch size (4, 16, 64) and ε-clip (0.1, 0.2, 0.3) to suit our hardware and the task. After tuning with RND, we use the same setting for our RND+SM. Fig. 6 reports all results for this environment. Fig. 6 (a) compares the final intrinsic reward (IR) generated by RND and RND+SM over training time. Overall, RND's IR is always higher than RND+SM's, indicating that our method significantly reduces the attention of the agent to the noisy TV by assigning less IR to watching TV. Fig. 6 (b) compares the number of noisy actions between the two methods, where RND+SM consistently takes fewer watch-TV actions. This confirms that the RND+SM agent is less distracted by the TV. As mentioned in the main text, RND+SM is better at handling noise than RND. Note that RND aims to predict the transformed states by minimizing ‖SG(s_t) − f_R(s_t)‖, where f_R is a fixed, randomly initialized neural network. If RND can learn the transformation, it can pass the state through, which is similar to reconstruction in an autoencoder. However, learning f_R can be harder and require more samples than learning an identity transformation, since f_R is non-linear and complicated. Hence, it may be more challenging for RND to pass the noise through than for SM. Another possible reason lies in the operating space (state vs. surprise). If we treat white noise as a random variable X, a surprise generator (SG) can at most learn to predict the mean of this variable and compute the surprise U = E[X|Y] − X, where Y is a random factor that affects the training of the surprise generator. The factor Y makes the SG produce the imperfect reconstruction E[X|Y] (in this case, the perfect reconstruction would be E[X]). Here, SG and SM learn to reconstruct X and U, respectively.
We can prove that the variance of each feature dimension in U is smaller than that of X (see Sec. E). Learning an autoencoder on the surprise space is more beneficial than on the state space, since the data has less variance and thus may require fewer data points to learn the data distribution. Fig. 6 (c) reports the performance of all baselines. Besides RND and RND+SM, we also include PPO without intrinsic reward as the vanilla Baseline for reference. In addition, we investigate a simple implementation of SM using a count-based method to measure surprise novelty. Concretely, we use the SimHash algorithm to count the number of surprises c(u_t) in a similar manner to (Bellemare et al., 2016) and name this baseline RND+SM (count). The intrinsic reward is then β/√c(u_t). We tune the hyperparameters β = {0.5, 1, 5} and the hash matrix size k_h = {32, 64, 128, 256} and use the same normalization and training process to run this baseline. We report the learning curves of the best variant, with β = 0.5 and k_h = 128. The result demonstrates that the proposed SM using memory-augmented neural networks outperforms the count-based SM by a significant margin. One possible reason is that the count-based method cannot handle white noise: it always returns high intrinsic rewards. In contrast, our SM can, to some extent, reconstruct white noise via the pass-through mechanism and thus reduces the impact of fake surprise on learning. Also, the proposed SM is more flexible than the count-based counterpart, since it learns to reconstruct from the data rather than using a fixed counting scheme. The result also shows that RND+SM outperforms the vanilla Baseline. Although the improvement is moderate (0.9 vs 0.85), the result is remarkable since the Noisy-TV is designed to fool intrinsic motivation methods and, among all methods, only RND+SM outperforms the vanilla Baseline. D.2 MiniGrid The tasks in this experiment are from the MiniGrid library (Apache License) (Chevalier-Boisvert et al., 2018). In MiniGrid environments, the state is a description vector representing partial observation information such as the location of the agent, objects, moving directions, etc. The three tasks use the hardest maps:
• DoorKey: MiniGrid-DoorKey-16x16-v0
• LavaCrossing: MiniGrid-LavaCrossingS11N5-v0
• DynamicObstacles: MiniGrid-Dynamic-Obstacles-16x16-v0
The SGs used in this experiment are RND (Burda et al., 2018b), ICM (Pathak et al., 2017), NGU (Badia et al., 2019) and AE. Below we describe the input-output structure of these SGs.
• RND: I_t = s_t and O_t = f_R(s_t), where s_t is the current state and f_R is a neural network with a similar structure to the prediction network, yet its parameters are initialized randomly and fixed during training.
• ICM: I_t = (s_{t−1}, a_t) and O_t = s_t, where s is the embedding of the state and a the action. We note that in addition to the surprise loss (Eq. 2), ICM is trained with an inverse dynamics loss.
• NGU: This agent reuses RND as the SG (I_t = s_t and O_t = f_R(s_t)) and combines the surprise norm with a KNN episodic reward. When applying our SM to NGU, we only take the surprise-based reward as input to the SM. The code for NGU is based on the public repository https://github.com/opendilab/DI-engine.
• AE: I_t = s_t and O_t = s_t, where s is the embedding of the state. This SG can be viewed as an associative memory of the observation, aiming to remember the states. This baseline is designed to verify the importance of surprise modeling.
Despite sharing a similar architecture, it differs from our SM, which operates on surprise and has an augmented episodic memory to support reconstruction. The backbone RL algorithm is PPO. The code for PPO and RND is the same as in Sec. D.1. We adopt a public code repository for the implementation of ICM (MIT License, https://github.com/jcwleo/curiosity-driven-exploration-pytorch). We implement AE ourselves using a 3-layer feed-forward neural network. For the SGs, we only tune the number of actors (32, 128, 1024), mini-batch size (4, 16, 64) and ε-clip (0.1, 0.2, 0.3) on the DoorKey task. We also tune the architecture of the AE (number of layers: 1, 2 or 3; activation: tanh or ReLU) on the DoorKey task. After tuning the SGs, we use the same settings for our SG+SM. The detailed configurations of the SGs for this experiment are reported in Table 3 and Table 4. The full learning curves of the backbone (Baseline), SG and SG+SM are given in Fig. 7. To visualize the difference between surprise and surprise-residual vectors, we map those in the synthetic trajectory to 2-dimensional space using a t-SNE projection in Fig. 8 (panels: Surprise and Surprise Residual). The surprise points show clustered patterns for high-MNIR states, which confirms our hypothesis that there exist familiar surprises (they are highly surprising due to high norm, yet repeated). In contrast, the surprise residuals estimated by the SM show no high-MNIR clusters. The SM transforms clustered surprises into scattered surprise residuals, resulting in a broader range of MNIR and thus significant discrimination between states that have similar surprise norms. D.3 Atari The Atari 2600 Games task involves training an agent to achieve high game scores. The state is a 2D image representing the screen of the game. SG and RL backbone implementations: We use 2 SGs: RND and LWM. RND uses a PPO backbone as in previous sections. On the other hand, LWM uses a DQN backbone with a CNN-based encoder and a GRU-based value function. The LWM SG uses a GRU to model the forward dynamics of the environment, and thus its input is I_t = (s_{t−1}, a_t, h_{t−1}), where s_{t−1} is the embedding of the previous state, a_t the current action, and h_{t−1} the hidden state of the world-model GRU. The target O_t is the embedding of the current state s_t. RND follows the same implementation as in previous experiments. We use the public code of LWM provided by the authors (https://github.com/htdt/lwm) to implement LWM. The hyperparameters of RND and LWM are tuned by the repository's owner (see Table 4 for RND and refer to the code or the original paper (Ermolov and Sebe, 2020) for the details of the LWM implementation). We augment them with our SM with default hyperparameters N = 128, d = 16. Training and evaluation: We follow the standard training setup for Atari games, such as stacking four frames and enabling sticky actions. All environments are based on OpenAI gym-atari's NoFrameskip-v4 variants (MIT License, https://github.com/openai/gym). After training, we evaluate the models by measuring the average return over 128 episodes and report the results in Table 2. Depending on the setting, the models are trained for 50 or 200 million frames. Results: Fig. 9 shows the learning curves of all models in 6 Atari games under the low-sample regime. LWM+SM clearly outperforms LWM in Frostbite, Venture, Gravitar and Solaris, and RND+SM clearly outperforms RND in Frostbite, Venture, Gravitar and Montezuma Revenge. Table 5 reports the results of more baselines.
D.4 Ablation study Role of Memories We conduct more ablation studies to verify the need for the short-term (M) and long-term (W) memory in our SM. We design the additional baselines SM (no W) and SM (no M) (see Sec. 3.4), and compare them with the full SM on the Montezuma Revenge and Frostbite tasks. Fig. 10 (a) shows that only SM (full) can reach an average score of more than 5,000 after 50 million training frames. The other ablated baselines only achieve around 2,000 points. We also show the impact of the episodic memory in decreasing the intrinsic rewards for similar states, as discussed in Sec. 2.3. We select 3 states in MiniGrid's KeyDoor task and compute the MNIR for each state, visualized in Fig. 11. At the step-1 state, the MNIR is low since there is nothing special in the view of the agent. At the step-15 state, the agent first sees the key and gets a high MNIR. At the step-28 state, the agent drops the key and sees the key again. This event is still more interesting than the step-1 state. However, the view is similar to the one at step 15, and thus the MNIR decreases from 0.7 to 0.35, as expected. No Task Reward The tasks in this experiment are from the MiniWorld library (Apache License) (Chevalier-Boisvert, 2018). The two tasks are: • Easy: MiniWorld-PickupObjs-v0 • Hard: MiniWorld-FourRooms-v0. The backbone and SG are the same as in Sec. D.1. We remove the task/external reward in this experiment. The Baseline, without task reward, receives no training signal and thus behaves like a random agent. Fig. 12 illustrates the running average of the cumulative task return and the intrinsic reward over training steps. In the Easy mode, the random Baseline can even perform better than RND, which indicates that a biased intrinsic reward is not always helpful. RND+SM, in both modes, shows superior performance, confirming that its intrinsic reward guides exploration better than that of RND. E Theoretical Property of the Surprise Space's Variance Let X be a random variable representing the observation at some timestep. A surprise generator (SG) can at most learn to predict the mean of this variable and compute the surprise U = E[X|Y] − X, where Y is a random factor that affects the prediction of the SG and makes it produce the imperfect reconstruction E[X|Y] instead of E[X]. For instance, in the case of an autoencoder AE as the SG, X and U are s_t and AE(s_t) − s_t, respectively. Let us denote Z = E(X|Y); then E[Z|Y] = Z and E[Z²|Y] = Z². We have

var(X) = var(X − Z + Z) = var(X − Z) + var(Z) + 2cov(X − Z, Z) = var(X − Z) + var(Z) + 2E[(X − Z)Z] − 2E[X − Z]E[Z]

Using the law of iterated expectations, we have

E[X − Z] = E[E[X − Z | Y]] = E[E[X|Y] − E[Z|Y]] = E[Z − Z] = 0

and

E[(X − Z)Z] = E[E[(X − Z)Z | Y]] = E[E[XZ − Z² | Y]] = E[E(XZ|Y) − E(Z²|Y)] = E[Z E(X|Y) − Z²] = E[Z² − Z²] = 0

Therefore, var(X) = var(X − Z) + var(Z). Let C^X_ii, C^{X−Z}_ii and C^Z_ii denote the diagonal entries of these covariance matrices; they are the variances of the components of the random vectors X, X − Z and Z, respectively. That is,

(σ^X_i)² = (σ^{X−Z}_i)² + (σ^Z_i)²  ⇒  (σ^X_i)² ≥ (σ^{X−Z}_i)² = (σ^U_i)²

In our setting, X and U represent the observation and surprise spaces, respectively. Therefore, the variance of each feature dimension in the surprise space is smaller than that of the observation space. Equality is obtained when (σ^Z_i)² = 0, i.e., E(X|Y) = E(X). That is, the SG's prediction is perfect, which is unlikely to happen in practice.
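The following NumPy sketch is a Monte Carlo sanity check of Proposition 1. The toy data-generating process (an observation X driven by a shared random factor Y, and an imperfect predictor that outputs E[X|Y]) is our own assumption; it simply instantiates the setting of Appendix E.

```python
import numpy as np

rng = np.random.default_rng(0)
num_samples, dim = 100_000, 8

Y = rng.normal(size=(num_samples, 1))                   # random factor affecting the prediction
X = Y + rng.normal(scale=0.5, size=(num_samples, dim))  # observation; here E[X|Y] = Y (broadcast)
U = Y - X                                               # surprise under the imperfect predictor E[X|Y]

print(X.var(axis=0))   # per-dimension variance of the observation (~1.25 each)
print(U.var(axis=0))   # per-dimension variance of the surprise (~0.25 each), always <= the above
```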
F Limitations Our method assumes that surprises have patterns and can be remembered by our surprise memory. There might exist environments beyond those studied in this paper where this assumption may not hold, or where surprise-based counterparts already achieve optimal exploration (e.g., with a perfect SG) and thus do not need SM for improvement (e.g., the Freeway game). In addition, M and W require additional physical memory (RAM/GPU) compared to SG-only methods. Finally, a plug-in module like SM introduces more hyperparameters, such as N and d. Although we find that the default values of N = 128 and d = 16 work well across all experiments in this paper, we recommend adjusting them if users apply our method to novel domains.
1. What is the focus and contribution of the paper regarding intrinsic motivation in reinforcement learning? 2. What are the strengths and weaknesses of the proposed Surprise Memory (SM) method? 3. How does the SM intrinsic reward evolve through training when the same objective is used for long periods? 4. Are there any limitations or potential drawbacks of the approach that the authors did not discuss? 5. How does the paper's content fare regarding clarity, quality, novelty, and reproducibility?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This work introduces a new method for intrinsic motivation in RL called Surprise Memory (SM) that, in similar fashion to surprise-based intrinsic motivation, provides an intrinsic reward to the agent when it finds a 'new' observation. SM differs in that it takes into account whether this 'surprise' was already expected from previous memories and reduces the intrinsic reward if that is the case. Intuitively, the method consists of a memory module (of fixed size) holding previous surprises and an autoencoder (AE) that tries to reconstruct a query built from the current surprise and the closest matching surprise within memory. The worse the AE does, the larger the intrinsic reward. Strengths And Weaknesses I believe that this is a good work: the method is novel and non-trivial, and the authors provide abundant theoretical and experimental ground for their approach. The paper is also well presented and easy to follow. Weaknesses: The authors only present the strengths of this method, which seem to be many, but I could not see any discussion on limitations or main weaknesses of this approach. The future lines proposed at the end feel a little hand-wavy in my opinion; they do not provide much ground for future researchers to continue this work. The need for the AE, while clearly demonstrated in the ablation experiments, is lacking an explanation on the theoretical side - besides that it works better. Going through the work I could grasp some intuition, but it would be good for the authors to detail this further in Section 2, since intuitively one could think that the norm between the surprise of this state and the one in memory would already tell you whether this is an "expected" surprise. Extra: I don't want to point this out as a weakness, but I would like to ask the authors how the SM intrinsic reward evolves through training when you train for the same objective for a long time. What I mean is that, if you are training your agent to get a red box at the end of the episode, since the box is exotic it provides a high intrinsic reward. However, as you train the agent more and more, the AE will get better at recovering that red-box surprise at the end; conversely, the AE probably cannot learn so well to reconstruct the TV noise. My question is: does the intrinsic reward for that final goal get relatively lower than the one for the TV as training progresses? And if not, could the authors explain why it doesn't happen? Clarity, Quality, Novelty And Reproducibility The work is easy to follow and clearly written. It could be improved by following the suggestions I wrote above, but these can be easily fixed. The only important issue is that the authors didn't share the code, nor could I find any mention of it in the appendix. It would be a good thing to have for reproducibility. I couldn't find any flaws in the proofs in the appendix about their theoretical propositions. Also, they provide abundant detail on hyperparameters, pseudocode and experiment details there. The paper has abundant experiments, ablations, baselines and benchmarks, and the results are quite promising. As far as I know the method itself is novel; I only know of [1], which also works with intrinsic motivation and memory. The method presented here is quite different from that one, but it is probably worth adding to the related work. [1] Fang, Zheng, Biao Zhao, and Guizhong Liu. "Image Augmentation Based Momentum Memory Intrinsic Reward for Sparse Reward Visual Scenes." arXiv preprint arXiv:2205.09448 (2022).
ICLR
Title Discriminative Particle Filter Reinforcement Learning for Complex Partial observations Abstract Deep reinforcement learning is successful in decision making for sophisticated games, such as Atari, Go, etc. However, real-world decision making often requires reasoning with partial information extracted from complex visual observations. This paper presents Discriminative Particle Filter Reinforcement Learning (DPFRL), a new reinforcement learning framework for complex partial observations. DPFRL encodes a differentiable particle filter in the neural network policy for explicit reasoning with partial observations over time. The particle filter maintains a belief using learned discriminative update, which is trained end-to-end for decision making. We show that using the discriminative update instead of standard generative models results in significantly improved performance, especially for tasks with complex visual observations, because they circumvent the difficulty of modeling complex observations that are irrelevant to decision making. In addition, to extract features from the particle belief, we propose a new type of belief feature based on the moment generating function. DPFRL outperforms state-of-the-art POMDP RL models in Flickering Atari Games, an existing POMDP RL benchmark, and in Natural Flickering Atari Games, a new, more challenging POMDP RL benchmark introduced in this paper. Further, DPFRL performs well for visual navigation with real-world data in the Habitat environment. The code is available online 1. 1 INTRODUCTION Deep Reinforcement Learning (DRL) has attracted significant interest, with applications ranging from game playing (Mnih et al., 2013; Silver et al., 2017) to robot control and visual navigation (Levine et al., 2016; Kahn et al., 2018; Savva et al., 2019). However, natural real-world environments remain challenging for current DRL methods (Arulkumaran et al., 2017), in part because they require (i) reasoning in a partially observable environment and (ii) reasoning with complex observations, such as visually rich images. Consider, for example, a robot, navigating in an indoor environment, with a camera for visual perception. To determine its own location and a traversable path, the robot must extract from image pixels relevant geometric features, which often coexist with irrelevant visual features, such as wall textures, shadows, etc. Further, the task is partially observable: a single image at the current time does not provide sufficient features for localization, and the robot must integrate information from the history of visual inputs received. The partially observable Markov decision process (POMDP) provides a principled general framework for decision making under partial observability. Solving POMDPs requires tracking a sufficient statistic of the action-observation history, e.g., the posterior distribution of the states, called the belief. Most POMDP reinforcement learning (RL) methods summarize the history into a vector using a recurrent neural network (RNN) (Hausknecht & Stone, 2015; Zhu et al., 2018). RNNs are model-free generic function approximators. Without appropriate structural priors, they need large amounts of training data to learn to track a complex belief well. Model-based DRL methods aim to reduce the sample complexity by learning a model together with a policy. In particular, to deal with partial observability, Igl et al. 
(2018) recently proposed DVRL, which learns a generative observation model embedded into the policy through a Bayes filter.
1 https://github.com/Yusufma03/DPFRL
Since the Bayes filter tracks the belief explicitly, DVRL performs much better than generic RNNs under partial observability. However, a Bayes filter normally assumes a generative observation model that defines the probability p(o | ht) of receiving an observation o = ot given the latent state ht (Fig. 1b). Learning this model can be very challenging since it requires modeling all observation features, including features irrelevant for RL. When o is an image, p(o | ht) is a distribution over all possible images. This means, e.g., to navigate in a previously unseen environment, we need to learn the distribution of all possible environments with their visual appearance, lighting condition, etc. — a much harder task than learning to extract features relevant to navigation, e.g., the traversable space. We introduce Discriminative Particle Filter Reinforcement Learning (DPFRL), a POMDP RL method that learns to explicitly track a belief over the latent state without a generative observation model, and makes decisions based on features of the belief (Fig. 1a). DPFRL approximates the belief by a set of weighted learnable latent particles {(h^i_t, w^i_t)}_{i=1}^K, and it tracks this particle belief by a nonparametric Bayes filter algorithm, an importance weighted particle filter, encoded as a differentiable computational graph in the neural network architecture. The importance weighted particle filter applies a discriminative update to the belief with an observation-conditioned transition model and a discriminative state-observation compatibility function (serving as the importance weights), both of which are learnable neural networks trained end-to-end. By using these update functions instead of the transition and observation models of the standard particle filter, DPFRL sidesteps the difficulty of learning a generative observation model (Fig. 1b). The model is discriminative in the sense that the compatibility function fobs(ot, ht), as shown in Fig. 1c, while playing a role analogous to p(ot | ht), is not required to directly represent a normalized distribution over observations; through end-to-end training it only needs to model observation features relevant for the RL task. Finally, to summarize the particle belief for the policy, we introduce novel learnable features based on Moment-Generating Functions (MGFs) (Bulmer, 1979). MGF features are computationally efficient and permutation invariant, and they can be directly optimized to provide useful higher-order moment information for learning a policy. MGF features could also be used as learned features of any empirical distribution in applications beyond RL. We evaluate DPFRL on a range of POMDP RL domains: a continuous control task from Igl et al. (2018), Flickering Atari Games (Hausknecht & Stone, 2015), Natural Flickering Atari Games, a new domain with more complex observations that we introduce, and the Habitat visual navigation domain using real-world data (Savva et al., 2019). DPFRL outperforms state-of-the-art POMDP RL methods in most cases. Results show that belief tracking with a particle filter is effective for handling partial observability, and the discriminative update and MGF-based belief features allow for complex observations. 2 RELATED WORK Real-world decision-making problems are often formulated as POMDPs.
POMDPs are notoriously hard to solve; in the worst case, they are computationally intractable (Papadimitriou & Tsitsiklis, 1987). Approximate POMDP solvers have made dramatic progress in solving large-scale POMDPs (Kurniawati et al., 2008). Particle filters have been widely adopted as a belief tracker for POMDP solvers (Silver & Veness, 2010; Somani et al., 2013) having the flexibility to model complex and multi-modal distributions, unlike Gaussian and Kalman filters. However, predefined model and state representations are required for these methods (see e.g. Bai et al. (2015)). Given the advances in generative neural network models, various neural models have been proposed for belief tracking (Chung et al., 2015; Maddison et al., 2017; Le et al., 2018; Naesseth et al., 2018). DVRL (Igl et al., 2018) uses a Variational Sequential Monte-Carlo method (Naesseth et al., 2018), similar to the particle filter we use, for belief tracking in RL. This gives better belief tracking capabilities, but as we demonstrate in our experiments, generative modeling is not robust in complex observation spaces with high-dimensional irrelevant observation. More powerful generative models, e.g., DRAW (Gregor et al., 2015), could be considered to improve generative observation modeling; however, evaluating a complex generative model for each particle would significantly increase the computational cost and optimization difficulty. Learning a robust latent representation and avoiding reconstructing observations are of great interest for RL (Oord et al., 2018; Guo et al., 2018; Hung et al., 2018; Gregor et al., 2019; Gelada et al., 2019). Discriminative RNNs have also been widely used for belief approximation in partially observable domains (Bakker, 2002; Wierstra et al., 2007; Foerster et al., 2016). The latent representation is directly optimized for the policy p(a|ht) that skips observation modeling. For example, Hausknecht & Stone (2015) and Zhu et al. (2018) tackle partially observable Flickering Atari Games by extending DQN (Mnih et al., 2013) with an LSTM memory. Our experiments demonstrate that the additional structure for belief tracking provided by a particle filter can give improved performance in RL. Embedding algorithms into neural networks to allow end-to-end discriminative training has gained attention recently. For belief tracking, the idea has been used in the differentiable histogram filter (Jonschkowski & Brock, 2016), Kalman filter (Haarnoja et al., 2016) and particle filter (Karkus et al., 2018; Jonschkowski et al., 2018). Further, Karkus et al. (2017) combined a learnable histogram filter with the Value Iteration Network (Tamar et al., 2016) and introduced a learnable POMDP planner, QMDP-net. However, these methods require a predefined state representation and are limited to relatively small state spaces. Ma et al. (2019) integrated the particle filter with standard RNNs, e.g., the LSTM, and introduced PF-RNNs for sequence prediction. We build on the work of Ma et al. (2019) and demonstrate its advantages for RL with complex partial observations, and extend it with MGF features for improved decision making from particle beliefs. Note that our framework is not specific to PF-RNNs, and could be applied to other differentiable particle filters as well. 3 DISCRIMINATIVE PARTICLE FILTER REINFORCEMENT LEARNING We introduce DPFRL for reinforcement learning under partial and complex observations. The DPFRL architecture is shown in Fig. 2. 
It has two main components, a discriminatively trained particle filter that tracks a latent belief b_t, and an actor network that learns a policy p(a | b_t) given the belief b_t.

3.1 PARTICLE FILTER FOR LATENT BELIEF TRACKING

Latent State Representation. In POMDPs the semantics of the state s is typically defined explicitly. State variables may correspond to the position of a robot, the configuration of obstacles, etc. In DPFRL, we do not require an explicit specification of the state variables, but implicitly represent the state as a vector h of latent variables; that is, the semantics of the state variables are learned instead of being pre-specified. We use a fully differentiable particle filter algorithm to maintain a belief over h. More specifically, we approximate the belief with a set of weighted latent particles b_t ≈ {(h_t^i, w_t^i)}_{i=1}^K, where {h_t^i}_{i=1}^K are K latent states learned by policy-oriented training, and {w_t^i}_{i=1}^K are the corresponding weights. Each latent state h_t^i stands for a hypothesis in the belief; the set of latent particles provides an approximate representation of the belief.

Belief Update. In a basic particle filter, there are two key steps to update a particle belief {(h_{t-1}^i, w_{t-1}^i)}_{i=1}^K to a new particle belief {(h_t^i, w_t^i)}_{i=1}^K upon receiving an observation o_t after executing action a_t:

h_t^i ∼ p(h | h_{t-1}^i, a_t),    (1)
w_t^i = η p(o_t | h_t^i) w_{t-1}^i,    η = 1 / Σ_{i=1}^K p(o_t | h_t^i) w_{t-1}^i    (2)

The first step, Eq. 1, takes the transition dynamics into account to update each particle. The second step, Eq. 2, takes the observation into account to reweigh the new particles. Our belief update has a similar structure to the standard particle filter, but we replace the transition model and the observation model with richer functions to make the update more suitable for learning a policy in a partially observable domain. Specifically, the update equations are as follows:

h_t^i ∼ f_trans(h_{t-1}^i, a_t, o_t),    (3)
w_t^i = η f_obs(h_t^i, o_t) w_{t-1}^i,    η = 1 / Σ_{i=1}^K f_obs(h_t^i, o_t) w_{t-1}^i,    (4)
{(h_t^{'i}, w_t^{'i})}_{i=1}^K = Soft-Resampling({(h_t^i, w_t^i)}_{i=1}^K)    (5)

Below, we first explain the intuition behind the above updates and the roles of f_trans and f_obs as compared to the standard transition and observation models. We then derive the above rules from an importance weighted particle filter in Sect. 3.2.

Observation-conditioned transition update. Eq. 3 takes a form more general than that in Eq. 1: instead of using the transition dynamics p(h | h_{t-1}^i, a_t) to evolve a particle, we use a more general observation-conditioned transition f_trans(h | h_{t-1}^i, a_t, o_t). Incorporating the observation helps alleviate the problem of sampling unlikely particles. In fact, if we take f_trans to be p(h | h_{t-1}^i, a_t, o_t), then this allows us to skip Eq. 2, and completely avoids sampling particles that are likely considering a_t only, but unlikely considering both a_t and o_t. Of course, in RL we do not have access to p(h | h_{t-1}^i, a_t, o_t); instead, f_trans is learned. In our implementation, a network first extracts features from o_t; these are fed to a gated function following the PF-GRU of Ma et al. (2019), which outputs the mean and variance of a normal distribution. Details are in the Appendix.

Importance weighting via a compatibility function. Eq. 4 is a relaxed version of Eq. 2: instead of using the observation model p(o_t | h_t^i) to adjust the particle weights based on their compatibility with the observation, we use a general non-negative compatibility function f_obs(h_t^i, o_t).
If the compatibility function were required to satisfy the normalization constraint that Σ_o f_obs(h, o) is a constant for all h, then it would be equivalent to a conditional distribution of o given h. We do not require this, and thus the update loses the probabilistic interpretation of Eq. 2. However, eliminating the need for the normalization constraint allows the compatibility function to be efficiently trained, as we can avoid computing the normalization constant. In addition, since the observation has already been incorporated in Eq. 3, we actually expect that the weights need to be adjusted in a way different from the standard particle filter. In our implementation, f_obs(h_t^i, o_t) is a neural network with a single fully connected layer that takes in a concatenation of h_t^i and features extracted from o_t. The output of the network is interpreted as the log of f_obs; and for numerical stability we perform the weight updates of Eq. 4 in log-space as well. Note that more complex network architectures could improve the capability of f_obs, which we leave to future work.

Soft-resampling. To avoid particle degeneracy, i.e., most of the particles having a near-zero weight, particle filters typically resample particles. We adopt the soft-resampling strategy of Karkus et al. (2018); Ma et al. (2019), which provides approximate gradients for the non-differentiable resampling step. Instead of sampling from p_t(i) = w_t^i, we sample particles {h_t^{'i}}_{i=1}^K from a softened proposal distribution q_t(i) = α w_t^i + (1 − α)/K, where α is a trade-off parameter. The new weights are derived using importance sampling: w_t^{'i} = w_t^i / (α w_t^i + (1 − α)/K). This gives the final particle belief {(h_t^{'i}, w_t^{'i})}_{i=1}^K = Soft-Resampling({(h_t^i, w_t^i)}_{i=1}^K). As a result, f_obs can be optimized with global belief information and can model useful features shared across multiple time steps. Another related concern is that the particle distribution may collapse to particles with the same latent state. This can be avoided by ensuring that the stochastic transition function f_trans has a non-zero variance, e.g., by adding a small constant to the learned variance.

End-to-end training. In DPFRL the observation-conditioned transition function f_trans and the compatibility function f_obs are learned. Instead of training for a modeling objective, they are trained end-to-end for the final RL objective, backpropagating gradients through the belief-conditional policy p(a | b_t) and the update steps of the particle filter algorithm, Eq. 3-5.

3.2 CONNECTION TO IMPORTANCE WEIGHTED PARTICLE FILTER

Our belief update can be motivated by the following importance weighted particle filter. Directly learning p(h′ | h, a, o) is generally difficult, but if we have a distribution q(h′ | h, a, o) that is easy to learn, then we can use importance sampling to update a particle belief:

h_t^i ∼ q(h | h_{t-1}^i, a_t, o_t),    (6)
w_t^i = η f(h_t^i, h_{t-1}^i, a_t, o_t) w_{t-1}^i,    η = 1 / Σ_{i=1}^K f(h_t^i, h_{t-1}^i, a_t, o_t) w_{t-1}^i    (7)

where f = p/q is the importance weight. Consider the case that q(h′ | h, a, o) is the conditional distribution of a joint distribution q(h′, h, a, o) of the form p(h′ | h, a) q(o | h′). That is, p and q share the same transition dynamics p(h′ | h, a). Then the importance weight f is a function of h′ and o only, because

f(h′, h, a, o) = p(h′ | h, a, o) / q(h′ | h, a, o) = [p(h′ | h, a) p(o | h′)] / [p(h′ | h, a) q(o | h′)] = p(o | h′) / q(o | h′).

This simpler form is exactly the form that we used for f_obs in our belief update.
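To make the update in Eq. 3-5 concrete, the following is a minimal PyTorch-style sketch of one belief update step. The module names, tensor shapes, and the simple linear parameterizations of f_trans and f_obs are illustrative assumptions; the authors' implementation uses a PF-GRU-style gated transition (see Appendix C) rather than the plain Gaussian head shown here.

```python
import torch
import torch.nn as nn

class DiscriminativePFStep(nn.Module):
    """One belief update b_{t-1} -> b_t following Eq. 3-5 (illustrative sketch)."""

    def __init__(self, h_dim, a_dim, o_feat_dim, alpha=0.5):
        super().__init__()
        self.alpha = alpha  # soft-resampling trade-off parameter
        # f_trans: observation-conditioned transition, outputs mean and log-variance (Eq. 3).
        self.trans = nn.Linear(h_dim + a_dim + o_feat_dim, 2 * h_dim)
        # f_obs: state-observation compatibility; the output is interpreted as log f_obs (Eq. 4).
        self.obs = nn.Linear(h_dim + o_feat_dim, 1)

    def forward(self, h, log_w, a, o_feat):
        # h: [B, K, h_dim] particles, log_w: [B, K] log-weights, a: [B, a_dim], o_feat: [B, o_feat_dim]
        B, K, H = h.shape
        a_k = a.unsqueeze(1).expand(B, K, -1)
        o_k = o_feat.unsqueeze(1).expand(B, K, -1)

        # Eq. 3: sample h_t^i ~ f_trans(h_{t-1}^i, a_t, o_t) via the reparameterization trick.
        mean, log_var = self.trans(torch.cat([h, a_k, o_k], dim=-1)).chunk(2, dim=-1)
        h_new = mean + torch.randn_like(mean) * (0.5 * log_var).exp()

        # Eq. 4: reweigh particles with the compatibility function, computed in log-space.
        log_f_obs = self.obs(torch.cat([h_new, o_k], dim=-1)).squeeze(-1)      # [B, K]
        log_w = log_w + log_f_obs
        log_w = log_w - torch.logsumexp(log_w, dim=1, keepdim=True)            # normalization eta

        # Eq. 5: soft-resampling from q(i) = alpha*w_i + (1-alpha)/K with importance reweighting.
        w = log_w.exp()
        q = self.alpha * w + (1.0 - self.alpha) / K
        idx = torch.multinomial(q, K, replacement=True)                        # [B, K] resampled indices
        h_res = torch.gather(h_new, 1, idx.unsqueeze(-1).expand(B, K, H))
        w_res = torch.gather(w / q, 1, idx)                                    # new weights w_i / q_i
        log_w_res = (w_res / w_res.sum(dim=1, keepdim=True)).log()
        return h_res, log_w_res
```

In use, one such step would be applied per environment step, with o_feat produced by the observation encoder described in Appendix B.1; gradients flow through the reparameterized transition and the soft-resampling reweighting, as described above.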
3.3 DISCRIMINATIVE VS. GENERATIVE MODELING

We expect the discriminative compatibility function to be more effective than a generative model for the following reasons. A generative model aims to approximate p(o | h) by learning a function that takes h as input and outputs a parameterized distribution over o. When o is, e.g., an image, this requires approximations, e.g., using pixel-wise Gaussians with learned mean and variance. This model is also agnostic to the RL task and considers all observation features equally, including features irrelevant for filtering and decision making. In contrast, f_obs takes o and h as inputs, and estimates the compatibility of o and h for particle filtering directly. This function avoids forming a parametric distribution over o, and the function can be easier to learn. The same functional form is used for the energy function of energy-based models (LeCun et al., 2006) and in contrastive predictive coding (Oord et al., 2018), with similar benefits. For example, f_obs may learn unnormalized likelihoods that are only proportional to p(o | h) up to an o-dependent value, because after the normalization in Eq. 4 they would give the same belief update as the normalized p(o | h). Further, because f_obs is trained for the final RL objective instead of a modeling objective, it may learn a compatibility function that is useful for decision making, but that does not model all observation features and has no proper probabilistic interpretation. While the task-oriented training of discriminative models may improve policy performance for the reasons above, it cannot take advantage of an auxiliary learning signal like the reconstruction objective of a generative model. An interesting line of future work may combine generative models with a compatibility function to simultaneously benefit from both formulations.

3.4 BELIEF-CONDITIONAL ACTOR NETWORK

Conditioning a policy directly on a particle belief is non-trivial. To feed the belief to the networks, we need to summarize it into a single vector. We introduce a novel feature extraction method for empirical distributions based on Moment-Generating Functions (MGFs). The MGF of an n-dimensional random variable X is given by M_X(v) = E[e^{v^T X}], v ∈ R^n. In statistics, the MGF is an alternative specification of a random variable's probability distribution (Bulmer, 1979). Since the particle belief b_t is an empirical distribution, the moment generating function of b_t can be written as M_{b_t}(v) = Σ_{i=1}^K w_t^i e^{v^T h_t^i}. A more detailed background on MGFs is in Appendix A.2. In DPFRL, we use the values of the MGF at m learned locations v^{1:m} as the feature vector of the MGF. The j-th MGF feature is given by M_{b_t}(v^j); for clean notation, we write M_t^j in place of M_{b_t}(v^j). We use [h̄_t, M_t^{1:m}] as features for belief b_t, where h̄_t = Σ_{i=1}^K w_t^i h_t^i is the mean particle. The mean particle h̄_t, as the first-order moment, and the m additional MGF features give a summary of the belief characteristics. The number of MGF features, m, controls how much additional information we extract from the belief. We empirically study the influence of MGF features in ablation studies. Compared to Ma et al. (2019), which uses the mean as the belief estimate, MGF features provide additional features from the empirical distribution. Compared to DVRL (Igl et al., 2018), which treats the Monte-Carlo samples as a sequence and merges them by an RNN, MGF features are permutation-invariant, computationally efficient and easy to optimize, especially when the particle set is large.
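A minimal sketch of the MGF belief features just described is given below. The learnable locations v^{1:m} are held in a single parameter matrix here; this parameterization and the initialization scale are assumptions rather than details stated in the paper.

```python
import torch
import torch.nn as nn

class MGFBeliefFeatures(nn.Module):
    """Summarize a particle belief {(h_i, w_i)} into [mean particle, m MGF features]."""

    def __init__(self, h_dim, num_mgf_features):
        super().__init__()
        # Learned locations v^1..v^m, one row per MGF feature (assumed parameterization).
        self.v = nn.Parameter(0.01 * torch.randn(num_mgf_features, h_dim))

    def forward(self, h, w):
        # h: [B, K, h_dim] latent particles, w: [B, K] normalized weights.
        h_mean = (w.unsqueeze(-1) * h).sum(dim=1)              # first-order moment, [B, h_dim]
        # M_t^j = sum_i w_i * exp((v^j)^T h_i)  for j = 1..m
        proj = torch.einsum('bkh,mh->bkm', h, self.v)          # (v^j)^T h_i, [B, K, m]
        mgf = (w.unsqueeze(-1) * proj.exp()).sum(dim=1)        # [B, m]
        return torch.cat([h_mean, mgf], dim=-1)                # belief feature vector [h_mean, M^{1:m}]
```

The resulting vector [h̄_t, M_t^{1:m}] is what the policy and value heads described next would consume; note the feature is invariant to permutations of the particle set.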
Given the features [ h̄t,M 1:m t ] for bt, we compute the policy p(a | bt) with a policy network π(bt). We trained with an actor-critic RL algorithm, A2C (Mnih et al., 2016), where a value network V (bt) is introduced to assist learning. We use small fully-connected networks for π(bt) and V (bt) that share the same input bt. 4 EXPERIMENTS We evaluate DPFRL in a range of POMDP RL domains with increasing belief tracking and observation modeling complexity. We first use benchmark domains from the literature, Mountain Hike, and 10 different Flickering Atari Games. We then introduce a new, more challenging domain, Natural Flickering Atari Games, that uses a random video stream as the background. Finally we apply DPFRL to a challenging visual navigation domain with RGB-D observations rendered from real-world data. We compare DPFRL with a GRU network, a state-of-the-art POMDP RL method, DVRL, and ablations of the DPFRL architecture. As a brief conclusion, we show that: 1) DPFRL significantly outperforms GRU in most cases because of its explicit structure for belief tracking; 2) DPFRL outperforms the state-of-the-art DVRL in most cases even with simple observations, and its benefit increases dramatically with more complex observations because of DPFRL’s discriminative update; 3) MGF features are more effective for summarizing the latent particle belief than alternatives. 4.1 EXPERIMENTAL SETUP We train DPFRL and baselines with the same A2C algorithm, and use a similar network architecture and hyperparameters as the original DVRL implementation. DPFRL and DVRL differ in the particle belief update structure, but they use the same latent particle size dim(h) and the same number of particles K as in the DVRL paper (dim(h) = 128 and K = 30 for Mountain Hike, dim(h) = 256 and K = 15 for Atari games and visual navigation). The effect of the number of particles is discussed in Sect. 4.5. We train all models for the same number of iterations using the RMSProp optimizer (Tieleman & Hinton, 2012). Learning rates and gradient clipping values are chosen based on a search in the BeamRider Atari game independently for each model. Further details are in the Appendix. We have not performed additional searches for the network architecture and other hyper-parameters, nor tried other RL algorithm, such as PPO (Schulman et al., 2017), which may all improve our results. All reported results are averages over 3 different random seeds. We plot rewards accumulated in an episode, same as DVRL (Igl et al., 2018). The curves are smoothed over time and averaged over parallel environment executions. 4.2 MOUNTAIN HIKE Mountain Hike was introduced by Igl et al. (2018) to demonstrate the benefit of belief tracking for POMDP RL. It is a continuous control problem where an agent navigates on a fixed 20× 20 map. In the original task, partial observability is introduced by disturbing the agent observation with an additive Gaussian noise. To illustrate the effect of observation complexity in natural environments, we concatenate the original observation vector with a random noise vector. The complexity of the optimal policy remains unchanged, but the relevant information is now coupled with irrelevant observation features. More specifically, the state space and action space in Mountain Hike are defined as S = A = R2, where st = [xt, yt] and at = [δxt, δyt]. Transitions of the agent are stochastic with an additive Gaussian noise: st+1 = st + at + a, where a ∼ N (0, 0.25). 
The observation space is O = R2+l, where l is a predefined constant and l = 0 corresponds to the original setting. Observations are ot = [ost , o n t ], where o s t = st + s, s ∼ N (0, 1), and ont ∈ Rl is sampled from a uniform distribution U(−10, 10). The reward for each step is given by rt = r(xt, yt) − 0.01||at|| where r(xt, yt) is shown in Fig. 3. Episodes end after 75 steps. We train models for different settings of the noise vector length l, from l = 0 to l = 100. Results are shown in Fig. 4. We observe that DPFRL learns faster than the DVRL and GRU in all cases, including the original setting l = 0. Importantly, as the noise vector length increases, the performance of DVRL and GRU degrades, while DPFRL is unaffected. This demonstrates the ability of DPFRL to track a latent belief without having to explicitly model complex observations. 4.3 ATARI GAMES WITH PARTIAL OBSERVABILITY Atari games are one of the most popular benchmark domains for RL methods (Mnih et al., 2013). Their partially observable variants, Flickering Atari Games, have been used to benchmark POMDP RL methods (Hausknecht & Stone, 2015; Zhu et al., 2018; Igl et al., 2018). Here image observations are single frames randomly replaced by a blank frame with a probability of 0.5. The flickering observations introduce a simple form of partial observability. Another variant, Natural Atari Games (Zhang et al., 2018), replaces the simple black background of the frames of an Atari game with a randomly sampled video stream. This modification brings the Atari domain one step closer to the visually rich real-world, in that the relevant information is now encoded in complex observations. As shown by Zhang et al. (2018), this poses a significant challenge for RL. We propose a new RL domain, Natural Flickering Atari Games, that involves both challenges: partial observability simulated by flickering frames, and complex observations simulated by random background videos. The background videos increase observation complexity without affecting the decision making complexity, making this a suitable domain for evaluating RL methods with complex observations. We sample the background video from the ILSVRC dataset (Russakovsky et al., 2015). Examples for the BeamRider game are shown in Fig. 5. Details are in Appendix B. We evaluate DPFRL for both Flickering Atari Games and Natural Flickering Atari Games. We use the same set of games as Igl et al. (2018). To ensure a fair comparison, we take the GRU and DVRL results from the paper for Flickering Atari Games, use the same training iterations as in Igl et al. (2018), and we use the official DVRL open source code to train for Natural Flickering Atari Games. Results are summarized in Table 1. We highlight the best performance in bold where the difference is statistically significant (p = 0.05). Detailed training curves are in Appendix E. We observe that DPFRL significantly outperforms GRU in almost all games, which indicates the importance of explicit belief tracking, and shows that DPFRL can learn a useful latent belief representation. Despite the simpler observations, DPFRL significantly outperforms DVRL and achieves state-of-the-art results on 5 out of 10 standard Flickering Atari Games (ChopperCommand, MsPacman, BeamRider, Bowling, Asteroids), and it performs comparably in 3 other games (Centipede, Frostbite, IceHockey). 
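Before turning to the Natural Flickering results, here is a small sketch of how one observation in that domain could be composed from a game frame and a background-video frame. The near-black background mask, the threshold, and the function and argument names are illustrative assumptions; the paper only states that the black Atari background is replaced by a video stream and that frames are blanked with probability 0.5.

```python
import numpy as np

def natural_flickering_obs(atari_frame, video_frame, rng, p_blank=0.5, bg_threshold=1):
    """Compose one Natural Flickering Atari observation (illustrative sketch).

    atari_frame, video_frame: uint8 arrays of shape [H, W, 3].
    Background (near-black) pixels of the game frame are replaced by the video frame,
    and the whole observation is replaced by a blank frame with probability p_blank.
    """
    # Treat near-black pixels as background (assumption about the compositing rule).
    background = atari_frame.max(axis=-1, keepdims=True) <= bg_threshold
    composed = np.where(background, video_frame, atari_frame)
    if rng.random() < p_blank:  # flickering: drop the frame entirely
        return np.zeros_like(composed)
    return composed
```

This keeps the decision-making problem of the underlying game unchanged while coupling the relevant information with visually rich, irrelevant background features, as described above.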
The strength of DPFRL shows even more clearly in the Natural Flickering Atari Games, where it significantly outperforms DVRL on 7 out of 10 games and performs similarly in the rest. In some games, e.g., Pong, DPFRL performs similarly with and without videos in the background (15.65 vs. 15.40), while the DVRL performance degrades substantially (-19.78 vs. 18.17). These results show that while the architectures of DPFRL and DVRL are similar, the policy-oriented discriminative update of DPFRL is much more effective for handling complex observations, and the MGF features provide a more powerful summary of the particle belief for decision making. However, on some games, e.g., ChopperCommand, even DPFRL performance drops significantly when adding background videos. This shows that irrelevant features can make a task much harder, even for a discriminative approach, as also observed by Zhang et al. (2018).

4.4 VISUAL NAVIGATION

[Figure 6: RGB-D Habitat Observations]

Table 2: Visual Navigation Results
Model                      SPL    Success Rate   Reward
DPFRL                      0.79   0.88           12.82 ± 5.82
DVRL                       0.09   0.11            5.22 ± 2.24
GRU                        0.63   0.74           10.14 ± 2.82
PPO (Savva et al., 2019)   0.70   0.80           —

Visual navigation poses a great challenge for deep RL (Mirowski et al., 2016; Zhu et al., 2017; Lample & Chaplot, 2017). We evaluate DPFRL for visual navigation in the Habitat Environment (Savva et al., 2019), using the real-world Gibson dataset (Xia et al., 2018). In this domain, a robot needs to navigate to goals in previously unseen environments. In each time step, it receives a first-person RGB-D camera image and its distance and relative orientation to the goal. The main challenge lies in the partial and complex observations: first-person view images only provide partial information about the unknown environment; and the relevant information for navigation, traversability, is encoded in rich RGB-D observations along with many irrelevant features, e.g., the texture of the wall. We use the Gibson dataset with the training and validation split provided by the Habitat challenge. We train models with the same architecture as for the Atari games, except for the observation encoder that accounts for the different observation format. We evaluate models in unseen environments from the validation split and compute the same metrics as in the literature: SPL, success rate, and average rewards. Results are shown in Table 2. Further details and results are in Appendix B and E. DPFRL significantly outperforms both DVRL and GRU in this challenging domain. DVRL performs especially poorly, demonstrating the difficulty of learning a generative observation model in realistic, visually rich domains. DPFRL also outperforms the PPO baseline from Savva et al. (2019). We note that submissions to the recently organized Habitat Challenge 2019 (Savva et al., 2019), such as Chaplot et al. (2019), have demonstrated better performance than the PPO baseline (while our results are not directly comparable because of the closed test set of the competition). However, these approaches rely on highly specialized structures, such as 2D mapping and 2D path planning, while we use the same generic network as for Atari games. Future work may further improve our results by adding a task-specific structure to DPFRL or training with PPO instead of A2C.

4.5 ABLATION STUDY

We conduct an extensive ablation study on the Natural Flickering Atari Games to understand the influence of each DPFRL component. The results are presented in Table 3.
The discriminative compatibility function is more effective than a generative observation function. DPFRL-generative replaces the discriminative compatibility function of DPFRL with a generative observation function, where grayscale image observations are modeled by pixel-wise Gaussian distributions with learned mean and variance. Unlike DVRL, DPFRL-generative only differs from DPFRL in the parameterization of the observation function, the rest of the architecture and training loss remains the same. In most cases, the performance for DPFRL-generative degrades significantly compared to DPFRL. These results are aligned with our earlier observations, and indicate that the compatibility function is capable of extracting the relevant information from complex observations without having to learn a more complex generative model. More particles perform better. DPFRL with 1 particle performs poorly on most of the tasks (DPFRLLP1). This indicates that a single latent state is insufficient to represent a complex latent distribution that is required for the task, and that more particles may improve performance. MGF features are useful. We compare DPFRL using MGF features with DPFRL-mean, that only uses the mean particle; and with DPFRL-GRUmerge, that uses a separate RNN to summarize the belief, similar to DVRL. Results show that DPFRL-mean does not work as well as the standard DPFRL, especially for tasks that may need complex belief tracking, e.g., Pong. This can be attributed to the more rich belief statistics provided by MGF features, and that they do not constrain the learned belief representation to be always meaningful when averaged. Comparing to DPFRL-GRUmerge shows that MGF features generally perform better. While an RNN may learn to extract useful features from the latent belief, optimizing the RNN parameters is harder, because they are not permutation invariant to the set of particles and they result in a long backpropagation chain. 5 CONCLUSION We have introduced DPFRL, a framework for POMDP RL in natural environments. DPFRL combines the strength of Bayesian filtering and end-to-end RL: it performs explicit belief tracking with learnable particle filters optimized directly for the RL policy. DPFRL achieved state-of-the-art results on POMDP RL benchmarks from prior work, Mountain Hike and a number of Flickering Atari Games. Further, it significantly outperformed alternative methods in a new, more challenging domain, Natural Flickering Atari Games, as well as for visual navigation using real-world data. We have proposed a novel MGF feature for extracting statistics from an empirical distribution. MGF feature extraction could be applied beyond RL, e.g., for general sequence prediction. DPFRL does not perform well in some particular cases, e.g., DoubleDunk. While our task-oriented discriminative update are less susceptible to complex and noisy observations than a generative model, they do not benefit from an additional learning signal that could improve sample efficiency, e.g., through a reconstruction loss. Future work may combine a generative observation model with the discriminative update in the DPFRL framework. 6 ACKNOWLEDGEMENT This research is partially supported by ONR Global and AFRL grant N62909-18-1-2023. We want to thank Maximilian Igl for suggesting to add videos to the background of Atari games. A BACKGROUND A.1 PARTICLE FILTER ALGORITHM Particle filter is an approximate Bayes filter algorithm for belief tracking. 
Bayes filters estimate the belief b_t, i.e., a posterior distribution of the state s_t, given the history of actions a_{1:t} and observations o_{1:t}. Instead of explicitly modeling the posterior distribution, a particle filter approximates the posterior with a set of weighted particles, b_t ≈ {(s_t^i, w_t^i)}_{i=1}^K, and updates the particles in a Bayesian manner. Importantly, the particle set can approximate arbitrary distributions, e.g., Gaussians, continuous multi-modal distributions, etc. The mean state can be estimated as the mean particle s̄_t = Σ_{i=1}^K w_t^i s_t^i. The particle updates include three steps: transition update, measurement update, and resampling.

Transition update. We first update the particles by a given motion model. More specifically, we sample the next state s_{t+1}^i from a generative transition function

s_{t+1}^i ∼ p(s | s_t^i, a_t)    (8)

where p(s | s_t^i, a_t) is the transition function.

Measurement update. The particle weights are then updated using the observation likelihoods

w_{t+1}^i = η p(o_t | s_{t+1}^i) w_t^i,    η = 1 / Σ_{i=1}^K p(o_t | s_{t+1}^i) w_t^i    (9)

where η is a normalization factor and p(o_t | s_{t+1}^i) is the observation likelihood computed by evaluating observation o_t under a generative observation function p(o | s_{t+1}^i).

Resampling. The particle filter algorithm can suffer from particle degeneracy, where after some update steps only a few particles have non-zero weights. This would prevent the particle filter from approximating the posterior distribution effectively. Particle degeneracy is typically addressed by performing resampling, where new particles are sampled with repetition, with probability proportional to their weights. Specifically, we sample particles from a categorical distribution parameterized by the particle weights {w_t^i}_{i=1}^K,

p(i) = w_t^i    (10)

where p(i) is the probability of the i-th category, i.e., the i-th particle. The new particles approximate the same distribution, but they assign more representation capacity to the relevant regions of the state space.

A.2 MOMENT-GENERATING FUNCTIONS

In probability theory, the moment-generating function (MGF) is an alternative specification of the probability distribution of a real-valued random variable (Bulmer, 1979). As its name suggests, the MGF of a random variable can be used to generate its moments of any order, which characterize its probability distribution. Mathematically, the MGF of a random variable X with dimension m is defined by

M_X(v) = E[e^{v^T X}]    (11)

where v ∈ R^m; we can view the MGF of X as the expectation of the random variable e^{v^T X}. Consider the series expansion of e^{v^T X}:

e^{v^T X} = 1 + v^T X + (v^T X)^2 / 2! + ... + (v^T X)^n / n! + ...    (12)

This leads to the well-known fact that the j-th order moment M_j (a j-way tensor) is the j-th order derivative of the MGF at v = 0:

M_j = d^j M_X / dv^j |_{v=0}    (13)

In DPFRL, we use MGFs as additional features to provide moment information of the particle distribution. DPFRL learns to extract useful moment features for decision making by directly optimizing for the policy p(a | b_t).

B EXPERIMENT DETAILS

B.1 IMPLEMENTATION DETAILS

Observation Encoders: For the observation encoders, we use the same structure as DVRL (Igl et al., 2018) for a fair comparison. For Mountain Hike, we use two fully connected layers with batch normalization and ReLU activation as the encoder. The dimension of both layers is 64. For the rest of the domains, we first down-sample the images to 84×84, then process them with three 2D-convolution layers with channel numbers (32, 64, 32), kernel sizes (8, 4, 3) and strides (4, 2, 1), without padding.
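A minimal sketch of the image observation encoder with the layer sizes listed above (84×84 input, channels (32, 64, 32), kernels (8, 4, 3), strides (4, 2, 1), no padding); the final flattening and the placement of the ReLU activations are assumptions.

```python
import torch.nn as nn

class ImageEncoder(nn.Module):
    """84x84 image -> flat feature vector, following the conv specs in Appendix B.1."""

    def __init__(self, in_channels=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4),  # 84 -> 20
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),           # 20 -> 9
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, stride=1),           # 9 -> 7
            nn.ReLU(),
        )

    def forward(self, obs):
        # obs: [B, C, 84, 84]; output: [B, 32 * 7 * 7] = [B, 1568]
        return self.conv(obs).flatten(start_dim=1)
```

With these layer sizes, the flattened output has 32 × 7 × 7 = 1568 dimensions, which is consistent with the decoder output dimension mentioned below for 84×84 observations.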
The compass and goal information form a vector of length 2; it is appended to the image encoding as the input.

Observation Decoders: Both DVRL and the PF-GRU-generative model need observation decoders. For Mountain Hike, we use the same structure as the encoder with the layer order reversed. The transposed 2D-convolutional decoder likewise has the reversed structure. The decoder also uses an additional fully connected layer which outputs the required dimension (1568 for Atari and Habitat navigation, both of which have 84 × 84 observations).

Observation-conditioned transition network: We directly use the transition function of PF-GRU (Ma et al., 2019) for f_trans(h_{t-1}^i, a_t, o_t), which is a stochastic function with a GRU-style gated structure. The action a_t is first encoded by a fully connected layer with batch normalization and ReLU activation. The encoding dimension is 64 for Mountain Hike and 128 for all remaining tasks. The mean and variance of the normal distribution are learned by two additional fully connected layers; for the variance, we use Softplus as the activation function.

State-observation compatibility network: f_obs is implemented by a single fully connected layer without activation. In DVRL, the observation function is parameterized over the full observation space o, and p(o | h_{t-1}^i, a_t) is modeled as a multivariate independent Bernoulli distribution whose parameters are again determined by a neural network (Igl et al., 2018). For numerical stability, all probabilities are stored and computed in log-space, and the particle weights are always normalized after each weight update.

Soft-resampling: The soft-resampling hyperparameter α is set to 0.9 for Mountain Hike and 0.5 for the rest of the domains. Note that soft-resampling is used only for DPFRL, not for DVRL: DVRL resets the particle weights to 1/K after each resampling step, so its resampling step cannot be trained by the RL objective.

Belief Summary: The GRU used in DVRL and DPFRL-GRUmerge is a single-layer GRU whose input dimension equals the dimension of the latent vector plus 1, the extra input being the corresponding particle weight. The dimension of this GRU equals the dimension of the latent vector. For the MGF features, we use fully connected layers whose output dimension equals the number of MGF features. The activation function used is the exponential function. Other activation functions, e.g., ReLU, could potentially be explored to obtain generalized MGF features.

Actor Network and Policy Network: The actor network and policy network are two fully connected layers, which take the belief summary b_t = [h̄_t, M_t^{1:m}] as input. The output dimensions of these two networks are chosen according to the RL tasks.

Model Learning: For RL, we use an A2C algorithm with 16 parallel environments for both Mountain Hike and Atari games; for Habitat navigation, we only use 6 parallel environments due to GPU memory constraints. The loss function for DPFRL and the GRU-based policy is the standard A2C loss, L_t^A2C = L_t^A + λ_V L_t^V + λ_H L_t^H, where L_t^A is the policy loss, L_t^V is the value loss, L_t^H is the entropy loss for encouraging exploration, and λ_V and λ_H are two hyperparameters. For all experiments, we use λ_V = 0.5 and λ_H = 0.01. For DVRL, an additional encoding loss L_t^E is used to train the sequential VAE, which gives the loss function L_t^DVRL = L_t^A2C + λ_E L_t^E. We follow the default setting provided by Igl et al. (2018) and set λ_E = 0.1.
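A minimal sketch of the combined A2C objective above; the one-step advantage computation and the tensor shapes are simplifying assumptions (no multi-step bootstrapping or parallel-environment batching is shown).

```python
import torch.nn.functional as F

def a2c_loss(log_probs, values, returns, entropy, lambda_v=0.5, lambda_h=0.01):
    """L^A2C = L^A + lambda_V * L^V + lambda_H * L^H (illustrative sketch).

    log_probs: log pi(a_t | b_t) of the taken actions, values: V(b_t),
    returns: (bootstrapped) returns used as value targets, entropy: policy entropy per step.
    """
    advantages = (returns - values).detach()
    policy_loss = -(log_probs * advantages).mean()    # L^A
    value_loss = F.mse_loss(values, returns)          # L^V
    entropy_loss = -entropy.mean()                    # L^H (negative sign encourages exploration)
    return policy_loss + lambda_v * value_loss + lambda_h * entropy_loss
```

For DVRL, the additional encoding loss would simply be added to this value with weight λ_E, as stated above.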
The rest of the hyperparameters, including the learning rate, gradient clipping value and the soft-resampling α, are tuned on BeamRider and applied directly to all domains due to the high cost of the experiments. The learning rates for all networks are searched among the values (3×10^-5, 5×10^-5, 1×10^-4, 2×10^-4, 3×10^-4); the gradient clipping value is searched among {0.5, 1.0}; the soft-resampling α is searched among {0.5, 0.9}. The best-performing learning rates were 1×10^-4 for DPFRL and GRU, and 2×10^-4 for DVRL; the gradient clipping value for all models was 0.5; the soft-resampling α is set to 0.9 for Mountain Hike and 0.5 for Atari games.

B.2 EXPERIMENTAL SETUP

Natural Flickering Atari games: We follow the setting of prior work (Zhu et al., 2018; Igl et al., 2018): 1) 50% of the frames are randomly dropped; 2) a frameskip of 4 is used; 3) there is a 0.25 chance of repeating an action twice. In our experiments, we sample background videos from the ILSVRC dataset (Russakovsky et al., 2015). Only videos longer than 500 frames are sampled, to make sure the videos are long enough to introduce variability. For each new episode, we first sample a new video from the dataset, and a random starting pointer is sampled in this video. Once the video finishes, the pointer is reset to the first frame (not the starting pointer we sampled) and continues from there.

Experiment platform: We implement all models using PyTorch (Paszke et al., 2017) with CUDA 9.2 and CuDNN 7.1.2. Flickering Atari environments are modified based on OpenAI Gym (Brockman et al., 2016), and we directly use the Habitat APIs for visual navigation. Collecting experience and performing gradient updates are done on a single computation node with access to one GPU. For Mountain Hike and Atari games we use NVidia GTX1080Ti GPUs. For Habitat visual navigation we use NVidia RTX2080Ti GPUs.

C PF-GRU NETWORK ARCHITECTURE

We implement DPFRL with gated transition and observation functions for particle filtering similar to PF-GRU (Ma et al., 2019). In a standard GRU, the memory update is implemented by a gated function:

h_t = (1 − z_t) ◦ tanh(n_t) + z_t ◦ h_{t−1},    n_t = W_n [r_t ◦ h_{t−1}, x_t] + b_n    (14)

where W_n and b_n are the corresponding weights and biases, and z_t and r_t are learned gates. PF-GRU introduces a stochastic cell update by assuming the update to the memory, n_t^i, follows a parameterized Gaussian distribution:

n_t^i = W_n [r_t^i ◦ h_{t−1}^i, x_t] + b_n + ε_t^i,    ε_t^i ∼ N(0, Σ_t^i),    Σ_t^i = W_Σ [h_{t−1}^i, x_t] + b_Σ    (15)

With x_t = [f_enc^o(o_t), f_enc^a(a_t)], we implement the transition function h_{t+1}^i ∼ f_trans(h_t^i, o_t, a_t), where f_enc^o is the encoding network for observations and f_enc^a is the encoding network for actions. For the observation function, we directly use a fully connected layer f_obs(h_t^i, o_t) = W_o [h_t^i, o_t] + b_o, where W_o and b_o are the corresponding weights and biases.
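A minimal sketch of the stochastic gated update in Eq. 14-15. The gate computations follow the standard GRU form, and the Softplus on the learned variance matches the description in Appendix B.1, but the exact layer composition and the small variance floor are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticGatedTransition(nn.Module):
    """PF-GRU-style transition cell: a GRU update with a Gaussian-perturbed candidate memory."""

    def __init__(self, h_dim, x_dim, min_var=1e-3):
        super().__init__()
        self.gates = nn.Linear(h_dim + x_dim, 2 * h_dim)    # update gate z and reset gate r
        self.candidate = nn.Linear(h_dim + x_dim, h_dim)     # W_n, b_n
        self.var_layer = nn.Linear(h_dim + x_dim, h_dim)     # W_Sigma, b_Sigma
        self.min_var = min_var                                # keeps variance non-zero (avoids particle collapse)

    def forward(self, h_prev, x):
        # h_prev: [B*K, h_dim] particle memories, x = [encoded observation, encoded action]: [B*K, x_dim]
        z, r = torch.sigmoid(self.gates(torch.cat([h_prev, x], dim=-1))).chunk(2, dim=-1)
        n_mean = self.candidate(torch.cat([r * h_prev, x], dim=-1))
        var = F.softplus(self.var_layer(torch.cat([h_prev, x], dim=-1))) + self.min_var
        n = n_mean + var.sqrt() * torch.randn_like(n_mean)    # Eq. 15: Gaussian-perturbed candidate
        return (1.0 - z) * torch.tanh(n) + z * h_prev         # Eq. 14: gated memory update
```

In DPFRL, one such cell would be applied to every particle in parallel (the particle dimension is folded into the batch dimension above).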
D DPFRL ALGORITHM

Algorithm 1: DPFRL
Input: Previous belief b_{t-1} ≈ {(h_{t-1}^i, w_{t-1}^i)}_{i=1}^K, observation o_t, action a_t
  x_t^o ← Encoder(o_t)    (encode the raw observation)
  h_t^i ∼ f_trans(h_{t-1}^i, a_t, x_t^o)    (transition update)
  w_t^i ← η f_obs(x_t^o, h_t^i) w_{t-1}^i,    η = 1 / Σ_{i=1}^K w_t^i    (observation update)
  {(h_t^{'i}, w_t^{'i})}_{i=1}^K ← Soft-Resampling({(h_t^i, w_t^i)}_{i=1}^K)    (soft-resampling)
  h̄_t ← Σ_{i=1}^K w_t^{'i} h_t^{'i}    (compute the mean)
  for j = 1 : m do
    M_t^j ← Σ_{i=1}^K w_t^{'i} exp((v^j)^T h_t^{'i})    (compute MGF features)
  end
  p(a | b_t) ← π(h̄_t, M_t^{1:m})    (compute the policy)
  V(b_t) ← V(h̄_t, M_t^{1:m})    (compute the value)
Output: Updated belief b_t ≈ {(h_t^{'i}, w_t^{'i})}_{i=1}^K, policy p(a | b_t) and value V(b_t)

E ADDITIONAL RESULTS

E.1 FLICKERING ATARI GAMES PLOTS

We provide the accumulated reward curves for the Atari experiments in this section.

Standard Flickering Atari Games. For the standard Flickering Atari Games we provide the training curves below. Results for DVRL and GRU are directly taken from Igl et al. (2018). [Training curves (return vs. frames) for DPFRL, DVRL and GRU on: (a) Flickering Pong, (b) Flickering ChopperCommand, (c) Flickering MsPacman, (d) Flickering Centipede, (e) Flickering BeamRider, (f) Flickering Frostbite, (g) Flickering Bowling, (h) Flickering IceHockey, (i) Flickering DoubleDunk, (j) Flickering Asteroids.]

Natural Flickering Atari Games. For Natural Flickering Atari Games we report results for a separate validation set, where the background videos are different from the training set. The validation environment steps once after every 100 training iterations. [Training curves (return vs. frames) for DPFRL, DVRL and GRU on: (a) Natural Flickering Pong, (b) Natural Flickering ChopperCommand, (c) Natural Flickering MsPacman, (d) Natural Flickering Centipede, (e) Natural Flickering BeamRider, (f) Natural Flickering Frostbite, (g) Natural Flickering Bowling, (h) Natural Flickering IceHockey, (i) Natural Flickering DoubleDunk, (j) Natural Flickering Asteroids.]

E.2 VISUAL NAVIGATION

We present the reward curve for the Habitat visual navigation task below. DPFRL outperforms both the GRU-based policy and DVRL given the same training time.
DVRL struggles with training the observation model and fails during the first half of the training time. The GRU-based policy learns fast, but, with only a model-free belief tracker, it struggles to achieve higher reward after a certain point. We only provide the reward curve here, as SPL and success rate are only evaluated after training is finished.

E.3 PARTICLE VISUALIZATION WITH PCA

We further visualize the latent particles by principal component analysis (PCA) and keep the first 2 components. We choose a trajectory in the Habitat visual navigation experiment, where 15 particles are used. We observe that the particles initially spread across the space (t = 0). As the robot only receives partial information in the visual navigation task, particles gradually form a distribution with two clusters (t = 56), which represent two major hypotheses of its current state. After more information is incorporated into the belief, they begin to converge and finally become a single cluster (t = 81). We did not observe particle depletion or posterior collapse in our experiments. This could be further guarded against by adding an entropy loss on the learned variance of f_trans, which we leave for future study. [PCA scatter plots of the 15 latent particles at (a) t = 0, (b) t = 22, (c) t = 30, (d) t = 56, (e) t = 74, (f) t = 81.]
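A minimal sketch of the PCA particle visualization described above; the choice of time steps, the marker scaling by particle weight, and the plotting layout are assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_particles_pca(particles_per_step, weights_per_step, steps=(0, 22, 30, 56, 74, 81)):
    """Project latent particles onto their first two principal components and scatter them.

    particles_per_step: list of [K, h_dim] arrays (one per time step of a trajectory),
    weights_per_step:   list of [K] arrays of normalized particle weights.
    """
    pca = PCA(n_components=2)
    pca.fit(np.concatenate([particles_per_step[t] for t in steps], axis=0))
    fig, axes = plt.subplots(1, len(steps), figsize=(3 * len(steps), 3))
    for ax, t in zip(axes, steps):
        xy = pca.transform(particles_per_step[t])
        ax.scatter(xy[:, 0], xy[:, 1], s=200 * weights_per_step[t])  # marker size encodes weight
        ax.set_title(f"t = {t}")
    plt.tight_layout()
    plt.show()
```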
1. What are the novel ideas introduced in the paper for training deep reinforcement learning agents with state variables?
2. How does the model handle partially observed environments?
3. Can you explain the observation function f_{obs}(h_t^i, o_t) and how it is trained?
4. How does the policy base its decision on the whole set of particles?
5. Can you provide pseudocode for the algorithm in the appendix to clarify the process?
6. How does the discriminative PF RL algorithm compare to traditional PF in terms of visualizing the map stored in a given particle?
7. Can you comment on the relationship between Monte-Carlo Tree Search in RL agents and the sampling of different states in this paper?
8. Would you consider testing the algorithm on more realistic navigation tasks rather than the contrived Atari environment?
9. Are there any missing references in the paper regarding early works on DRL for navigation?
Review
Review Update: my concerns have been addressed and I have updated the score to 8 **** This paper introduces 3 neat ideas for training deep reinforcement learning (DRL) agents with state variables so that they can handle partially observed environments: 1) model the latent state variable as a belief distribution, using a collection of weighted hidden states, like in the particle filter (PF), with an explicit belief update of each particle, calculation of the weight using an observation function, and a differentiable re-weighting function to get the new belief distribution, 2) base the policy on the whole set of particles, by quantifying that set using its mean as well as a collection of K learnable moments (specifically, K Moment Generating Functions, each one corresponding to a dot product between the moment variable and the hidden state of the particle), 3) instead of generating the observations, take again the idea from PF which is to measure the agreement between the current observation o_t and the i-th particle state variable h_t^i, via a learnable discriminative function. From what I understand, the only gradients in the model come from the usual 3 RL losses, and the observation functions in the discriminative PF are trained because they weigh the particles. The model, trained using Advantage Actor Critic (A2C) works well on the (contrived, more on that later) "flickering Natural" Atari RL environment as well as on the Habitat navigation challenge, outperforming both the GRU-based deep RL agent and the Deep Variational RL based agent that uses variational sequence auto-encoders (and extra gradients from the observation function...). The ablation analysis confirms the advantages of the 3 ideas introduced in the paper. The paper is a very well written and the experiments are very well executed. I believe that the idea is novel. I gave this paper only a weak accept because of unclear explanation and of several missed opportunities: * The observation function f_{obs}(h_t^i, o_t) is insufficiently explained. I understood it was trained using discriminative training. Does it mean that different observations o_t are used, and if so, how many? Or is the observation o_t the current observation of the agent, but only the h_t^i change? In which case, what makes it discriminative? Isn't there a posterior collapse, with all particles ending up bearing the same state? Does the function f_{obs} input o_t or u(o_t), where u is the convolutional network? * These questions could be easily answered with pseudocode in the appendix. * In section 3.1, what is the relationship between p_t(i) and f_{obs}(h_t^i, o_t)? * Particle filters in navigation enable to store the history of the observations of the mobile robot, accumulating the laser range scans and matching them to the observations. At the end, one can visualise the map stored in a given particle, as well as visualise the point cloud of the particle coordinates and show the trajectories of these particles. Here the particles contain the hidden states of the agent. Could you similarly to traditional PF, visualise the position of the agent by matching the point cloud {{h_t^i}_i}_t to a set of observations o_k taken from the whole environment, and plotting a 2D map of weights coming from function f_{obs}(h_t, o_k) evaluated over all k? * In the discussion, can you comment on the relationship between Monte-Carlo Tree Search in RL agents (sampling different trajectories) vs. here (sampling different states)? 
* While I understand the need to use that environment for the sake of comparison to DVRL, the Atari + flickering + natural images dataset is very artificial and contrived. I would be interested in seeing more analysis of the discriminative PF RL algorithm on navigation tasks, given that that's what PF were designed for. Some missing references: * Early references on DRL for navigation: Zhu et al (2016) "Target-driven visual navigation in indoor scenes using deep reinforcement learning" Lample & Chaplot (2016) "Playing FPS games with deep reinforcement learning" Mirowski et al (2016) "Learning to navigate in complex environments"
Title Discriminative Particle Filter Reinforcement Learning for Complex Partial observations Abstract Deep reinforcement learning is successful in decision making for sophisticated games, such as Atari, Go, etc. However, real-world decision making often requires reasoning with partial information extracted from complex visual observations. This paper presents Discriminative Particle Filter Reinforcement Learning (DPFRL), a new reinforcement learning framework for complex partial observations. DPFRL encodes a differentiable particle filter in the neural network policy for explicit reasoning with partial observations over time. The particle filter maintains a belief using learned discriminative update, which is trained end-to-end for decision making. We show that using the discriminative update instead of standard generative models results in significantly improved performance, especially for tasks with complex visual observations, because they circumvent the difficulty of modeling complex observations that are irrelevant to decision making. In addition, to extract features from the particle belief, we propose a new type of belief feature based on the moment generating function. DPFRL outperforms state-of-the-art POMDP RL models in Flickering Atari Games, an existing POMDP RL benchmark, and in Natural Flickering Atari Games, a new, more challenging POMDP RL benchmark introduced in this paper. Further, DPFRL performs well for visual navigation with real-world data in the Habitat environment. The code is available online 1. 1 INTRODUCTION Deep Reinforcement Learning (DRL) has attracted significant interest, with applications ranging from game playing (Mnih et al., 2013; Silver et al., 2017) to robot control and visual navigation (Levine et al., 2016; Kahn et al., 2018; Savva et al., 2019). However, natural real-world environments remain challenging for current DRL methods (Arulkumaran et al., 2017), in part because they require (i) reasoning in a partially observable environment and (ii) reasoning with complex observations, such as visually rich images. Consider, for example, a robot, navigating in an indoor environment, with a camera for visual perception. To determine its own location and a traversable path, the robot must extract from image pixels relevant geometric features, which often coexist with irrelevant visual features, such as wall textures, shadows, etc. Further, the task is partially observable: a single image at the current time does not provide sufficient features for localization, and the robot must integrate information from the history of visual inputs received. The partially observable Markov decision process (POMDP) provides a principled general framework for decision making under partial observability. Solving POMDPs requires tracking a sufficient statistic of the action-observation history, e.g., the posterior distribution of the states, called the belief. Most POMDP reinforcement learning (RL) methods summarize the history into a vector using a recurrent neural network (RNN) (Hausknecht & Stone, 2015; Zhu et al., 2018). RNNs are model-free generic function approximators. Without appropriate structural priors, they need large amounts of training data to learn to track a complex belief well. Model-based DRL methods aim to reduce the sample complexity by learning a model together with a policy. In particular, to deal with partial observability, Igl et al. 
(2018) recently proposed DVRL, which learns a generative observation model embedded into the policy through a Bayes filter. Since 1https://github.com/Yusufma03/DPFRL the Bayes filter tracks the belief explicitly, DVRL performs much better than generic RNNs under partial observability. However, a Bayes filter normally assumes a generative observation model, that defines the probability p(o | ht) of receiving an observation o = ot given the latent state ht (Fig. 1b). Learning this model can be very challenging since it requires modeling all observation features, including features irrelevant for RL. When o is an image, p(o | ht) is a distribution over all possible images. This means, e.g., to navigate in a previously unseen environment, we need to learn the distribution of all possible environments with their visual appearance, lighting condition, etc. — a much harder task than learning to extract features relevant to navigation, e.g., the traversable space. We introduce the Discriminative Particle Filter Reinforcement Learning (DPFRL), a POMDP RL method that learns to explicitly track a belief over the latent state without a generative observation model, and make decisions based on features of the belief (Fig. 1a). DPFRL approximates the belief by a set of weighted learnable latent particles {(hit, wit)}Ki=1, and it tracks this particle belief by a nonparametric Bayes filter algorithm, an importance weighted particle filter, encoded as a differentiable computational graph in the neural network architecture. The importance weighted particle filter applies discriminative update to the belief with an observation-conditioned transition model and a discriminative state-observation compatibility function (serving as the importance weights), both of which are learnable neural networks trained end-to-end. By using these update functions instead of the transition and observation models of the standard particle filter, DPFRL sidesteps the difficulty of learning a generative observation model (Fig. 1b). The model is discriminative in the sense that the compatibility function, fobs(ot, ht), as shown in Fig. 1c, while playing an analogue role as p(ot | ht), is not required to directly represent a normalized distribution over observations; and through end-toend training it only needs to model observation features relevant for the RL task. Finally, to summarize the particle belief for the policy, we introduce novel learnable features based on Moment-Generating Functions (MGFs) (Bulmer, 1979). MGF features are computationally efficient and permutation invariant, and they can be directly optimized to provide useful higher-order moment information for learning a policy. MGF features could be also used as learned features of any empirical distribution in applications beyond RL. We evaluate DPFRL on a range of POMDP RL domains: a continuous control task from Igl et al. (2018), Flickering Atari Games (Hausknecht & Stone, 2015), Natural Flickering Atari Games, a new domain with more complex observations that we introduce, and the Habitat visual navigation domain using real-world data (Savva et al., 2019). DPFRL outperforms state-of-the-art POMDP RL methods in most cases. Results show that belief tracking with a particle filter is effective for handling partial observability, and the discriminative update and MGF-based belief features allow for complex observations. 2 RELATED WORK Real-world decision-making problems are often formulated as POMDPs. 
POMDPs are notoriously hard to solve; in the worst case, they are computationally intractable (Papadimitriou & Tsitsiklis, 1987). Approximate POMDP solvers have made dramatic progress in solving large-scale POMDPs (Kurniawati et al., 2008). Particle filters have been widely adopted as a belief tracker for POMDP solvers (Silver & Veness, 2010; Somani et al., 2013) having the flexibility to model complex and multi-modal distributions, unlike Gaussian and Kalman filters. However, predefined model and state representations are required for these methods (see e.g. Bai et al. (2015)). Given the advances in generative neural network models, various neural models have been proposed for belief tracking (Chung et al., 2015; Maddison et al., 2017; Le et al., 2018; Naesseth et al., 2018). DVRL (Igl et al., 2018) uses a Variational Sequential Monte-Carlo method (Naesseth et al., 2018), similar to the particle filter we use, for belief tracking in RL. This gives better belief tracking capabilities, but as we demonstrate in our experiments, generative modeling is not robust in complex observation spaces with high-dimensional irrelevant observation. More powerful generative models, e.g., DRAW (Gregor et al., 2015), could be considered to improve generative observation modeling; however, evaluating a complex generative model for each particle would significantly increase the computational cost and optimization difficulty. Learning a robust latent representation and avoiding reconstructing observations are of great interest for RL (Oord et al., 2018; Guo et al., 2018; Hung et al., 2018; Gregor et al., 2019; Gelada et al., 2019). Discriminative RNNs have also been widely used for belief approximation in partially observable domains (Bakker, 2002; Wierstra et al., 2007; Foerster et al., 2016). The latent representation is directly optimized for the policy p(a|ht) that skips observation modeling. For example, Hausknecht & Stone (2015) and Zhu et al. (2018) tackle partially observable Flickering Atari Games by extending DQN (Mnih et al., 2013) with an LSTM memory. Our experiments demonstrate that the additional structure for belief tracking provided by a particle filter can give improved performance in RL. Embedding algorithms into neural networks to allow end-to-end discriminative training has gained attention recently. For belief tracking, the idea has been used in the differentiable histogram filter (Jonschkowski & Brock, 2016), Kalman filter (Haarnoja et al., 2016) and particle filter (Karkus et al., 2018; Jonschkowski et al., 2018). Further, Karkus et al. (2017) combined a learnable histogram filter with the Value Iteration Network (Tamar et al., 2016) and introduced a learnable POMDP planner, QMDP-net. However, these methods require a predefined state representation and are limited to relatively small state spaces. Ma et al. (2019) integrated the particle filter with standard RNNs, e.g., the LSTM, and introduced PF-RNNs for sequence prediction. We build on the work of Ma et al. (2019) and demonstrate its advantages for RL with complex partial observations, and extend it with MGF features for improved decision making from particle beliefs. Note that our framework is not specific to PF-RNNs, and could be applied to other differentiable particle filters as well. 3 DISCRIMINATIVE PARTICLE FILTER REINFORCEMENT LEARNING We introduce DPFRL for reinforcement learning under partial and complex observations. The DPFRL architecture is shown in Fig. 2. 
It has two main components, a discriminatively trained particle filter that tracks a latent belief b_t, and an actor network that learns a policy p(a | b_t) given the belief b_t. 3.1 PARTICLE FILTER FOR LATENT BELIEF TRACKING Latent State Representation. In POMDPs the semantics of the state s is typically defined explicitly. State variables may correspond to the position of a robot, the configuration of obstacles, etc. In DPFRL, we do not require an explicit specification of the state variables, but implicitly represent the state as a vector h of latent variables; that is, the semantics of the state variables are learned instead of being pre-specified. We use a fully differentiable particle filter algorithm to maintain a belief over h. More specifically, we approximate the belief with a set of weighted latent particles b_t ≈ {(h_t^i, w_t^i)}_{i=1}^K, where {h_t^i}_{i=1}^K are K latent states learned by policy-oriented training, and {w_t^i}_{i=1}^K are the corresponding weights. Each latent state h_t^i stands for a hypothesis in the belief; the set of latent particles provides an approximate representation of the belief. Belief Update. In a basic particle filter, there are two key steps to update a particle belief {(h_{t-1}^i, w_{t-1}^i)}_{i=1}^K to a new particle belief {(h_t^i, w_t^i)}_{i=1}^K upon receiving an observation o_t after executing action a_t: h_t^i ∼ p(h | h_{t-1}^i, a_t), (1) w_t^i = η p(o_t | h_t^i) w_{t-1}^i, η = 1 / Σ_{i=1}^K p(o_t | h_t^i) w_{t-1}^i. (2) The first step, Eq. 1, takes the transition dynamics into account to update each particle. The second step, Eq. 2, takes the observation into account to reweigh the new particles. Our belief update has a similar structure to the standard particle filter, but we replace the transition model and the observation model with richer functions to make the update more suitable for learning a policy in a partially observable domain. Specifically, the update equations are as follows: h_t^i ∼ f_trans(h_{t-1}^i, a_t, o_t), (3) w_t^i = η f_obs(h_t^i, o_t) w_{t-1}^i, η = 1 / Σ_{i=1}^K f_obs(h_t^i, o_t) w_{t-1}^i, (4) {(h'^i_t, w'^i_t)}_{i=1}^K = Soft-Resampling({(h_t^i, w_t^i)}_{i=1}^K). (5) Below, we first explain the intuition behind these updates and the roles of f_trans and f_obs as compared to the standard transition and observation models. We then derive the above rules from an importance-weighted particle filter in Sect. 3.2. Observation-conditioned transition update. Eq. 3 takes a more general form than Eq. 1: instead of using the transition dynamics p(h | h_{t-1}^i, a_t) to evolve a particle, we use a more general observation-conditioned transition f_trans(h | h_{t-1}^i, a_t, o_t). Incorporating the observation alleviates the problem of sampling unlikely particles. In fact, if we take f_trans to be p(h | h_{t-1}^i, a_t, o_t), then this allows us to skip Eq. 2, and completely avoids sampling particles that are likely considering a_t only, but unlikely considering both a_t and o_t. Of course, in RL we do not have access to p(h | h_{t-1}^i, a_t, o_t); instead, f_trans is learned. In our implementation, a network first extracts features from o_t; these are fed to a gated function following the PF-GRU of Ma et al. (2019), which outputs the mean and variance of a normal distribution. Details are in the Appendix. Importance weighting via a compatibility function. Eq. 4 is a relaxed version of Eq. 2: instead of using the observation model p(o_t | h_t^i) to adjust the particle weights based on their compatibility with the observation, we use a general non-negative compatibility function f_obs(h_t^i, o_t).
If the compatibility function were required to satisfy the normalization constraint that Σ_o f_obs(h, o) is a constant for all h, then it would be equivalent to a conditional distribution of o given h. We do not require this, and thus the update loses the probabilistic interpretation of Eq. 2. However, eliminating the normalization constraint allows the compatibility function to be trained efficiently, as we avoid computing the normalization constant. In addition, since the observation has already been incorporated in Eq. 3, we actually expect the weights to be adjusted in a way different from the standard particle filter. In our implementation, f_obs(h_t^i, o_t) is a neural network with a single fully connected layer that takes in a concatenation of h_t^i and features extracted from o_t. The output of the network is interpreted as the log of f_obs; for numerical stability, we also perform the weight updates of Eq. 4 in log-space. Note that more complex network architectures could improve the capability of f_obs, which we leave to future work. Soft-resampling. To avoid particle degeneracy, i.e., most of the particles having near-zero weight, particle filters typically resample particles. We adopt the soft-resampling strategy of Karkus et al. (2018) and Ma et al. (2019), which provides approximate gradients for the non-differentiable resampling step. Instead of sampling from p_t(i) = w_t^i, we sample particles {h'^i_t}_{i=1}^K from a softened proposal distribution q(i) = α w_t^i + (1 − α)/K, where α is a trade-off parameter. The new weights are derived using importance sampling: w'^i_t = w_t^i / (α w_t^i + (1 − α)/K). The final particle belief is {(h'^i_t, w'^i_t)}_{i=1}^K = Soft-Resampling({(h_t^i, w_t^i)}_{i=1}^K). As a result, f_obs can be optimized with global belief information and can model useful features shared across multiple time steps. A related concern is that the particle distribution may collapse to particles with the same latent state. This can be avoided by ensuring that the stochastic transition function f_trans has non-zero variance, e.g., by adding a small constant to the learned variance. End-to-end training. In DPFRL the observation-conditioned transition function f_trans and the compatibility function f_obs are learned. Instead of training them for a modeling objective, they are trained end-to-end for the final RL objective, backpropagating gradients through the belief-conditional policy p(a | b_t) and the update steps of the particle filter algorithm, Eqs. 3–5. 3.2 CONNECTION TO IMPORTANCE WEIGHTED PARTICLE FILTER Our belief update can be motivated from the following importance-weighted particle filter. Learning p(h′ | h, a, o) directly is generally difficult, but if we have a distribution q(h′ | h, a, o) that is easy to learn, then we can use importance sampling to update a particle belief: h_t^i ∼ q(h | h_{t-1}^i, a_t, o_t), (6) w_t^i = η f(h_t^i, h_{t-1}^i, a_t, o_t) w_{t-1}^i, η = 1 / Σ_{i=1}^K f(h_t^i, h_{t-1}^i, a_t, o_t) w_{t-1}^i, (7) where f = p/q is the importance weight. Consider the case where q(h′ | h, a, o) is the conditional distribution of a joint distribution q(h′, h, a, o) of the form p(h′ | h, a) q(o | h′). That is, p and q share the same transition dynamics p(h′ | h, a). Then the importance weight f is a function of h′ and o only, because f(h′, h, a, o) = p(h′ | h, a, o) / q(h′ | h, a, o) = [p(h′ | h, a) p(o | h′)] / [p(h′ | h, a) q(o | h′)] = p(o | h′) / q(o | h′). This simpler form is exactly the form that we use for f_obs in our belief update.
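For concreteness, one step of the update in Eqs. 3–5 can be sketched as follows in PyTorch. This is an illustrative reading of the update rules rather than the released implementation: the module and variable names, the tensor shapes, and the Gaussian reparameterization used for f_trans are our assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscriminativeFilterStep(nn.Module):
    # Illustrative sketch of Eqs. 3-5; names, shapes and the Gaussian f_trans are assumptions.
    def __init__(self, h_dim, a_dim, o_feat_dim, alpha=0.5):
        super().__init__()
        self.alpha = alpha                                     # soft-resampling trade-off
        self.trans_mean = nn.Linear(h_dim + a_dim + o_feat_dim, h_dim)
        self.trans_var = nn.Linear(h_dim + a_dim + o_feat_dim, h_dim)
        self.compat = nn.Linear(h_dim + o_feat_dim, 1)         # output read as log f_obs(h, o)

    def forward(self, h, log_w, a, o_feat):
        # h: (K, B, h_dim); log_w: (K, B); a: (B, a_dim); o_feat: (B, o_feat_dim)
        K = h.size(0)
        a_k = a.unsqueeze(0).expand(K, -1, -1)
        o_k = o_feat.unsqueeze(0).expand(K, -1, -1)
        inp = torch.cat([h, a_k, o_k], dim=-1)
        mean = self.trans_mean(inp)
        var = F.softplus(self.trans_var(inp)) + 1e-3           # non-zero variance avoids particle collapse
        h_new = mean + var.sqrt() * torch.randn_like(var)      # Eq. 3, reparameterized sample
        log_f = self.compat(torch.cat([h_new, o_k], dim=-1)).squeeze(-1)
        log_w = torch.log_softmax(log_f + log_w, dim=0)        # Eq. 4 in log-space, normalized over particles
        # Eq. 5: soft-resampling from q(i) = alpha * w_i + (1 - alpha) / K
        w = log_w.exp()
        q = self.alpha * w + (1.0 - self.alpha) / K
        idx = torch.multinomial(q.t(), K, replacement=True).t()                  # (K, B) resampled indices
        h_res = torch.gather(h_new, 0, idx.unsqueeze(-1).expand(-1, -1, h_new.size(-1)))
        w_res = torch.gather(w / q, 0, idx)                                      # w'_i = w_i / q(i)
        log_w_res = torch.log(w_res) - torch.log(w_res.sum(0, keepdim=True))
        return h_res, log_w_res                                                  # new particle belief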
3.3 DISCRIMINATIVE VS. GENERATIVE MODELING We expect the discriminative compatibility function to be more effective than a generative model for the following reasons. A generative model aims to approximate p(o | h) by learning a function that takes h as input and outputs a parameterized distribution over o. When o is, e.g., an image, this requires approximations, e.g., using pixel-wise Gaussians with learned mean and variance. This model is also agnostic to the RL task and considers all observation features equally, including features irrelevant for filtering and decision making. In contrast, f_obs takes o and h as inputs, and estimates the compatibility of o and h for particle filtering directly. This function avoids forming a parametric distribution over o, and it can be easier to learn. The same functional form is used for the energy function of energy-based models (LeCun et al., 2006) and in contrastive predictive coding (Oord et al., 2018), with similar benefits. For example, f_obs may learn unnormalized likelihoods that match p(o | h) only up to an o-dependent factor, because after the normalization in Eq. 4 they would give the same belief update as the normalized p(o | h). Further, because f_obs is trained for the final RL objective instead of a modeling objective, it may learn a compatibility function that is useful for decision making, but that does not model all observation features and has no proper probabilistic interpretation. While the task-oriented training of discriminative models may improve policy performance for the reasons above, it cannot take advantage of an auxiliary learning signal like the reconstruction objective of a generative model. An interesting line of future work may combine generative models with a compatibility function to simultaneously benefit from both formulations. 3.4 BELIEF-CONDITIONAL ACTOR NETWORK Conditioning a policy directly on a particle belief is non-trivial. To feed the belief to the networks, we need to summarize it into a single vector. We introduce a novel feature extraction method for empirical distributions based on Moment-Generating Functions (MGFs). The MGF of an n-dimensional random variable X is given by M_X(v) = E[e^{v^T X}], v ∈ R^n. In statistics, the MGF is an alternative specification of a probability distribution (Bulmer, 1979). Since the particle belief b_t is an empirical distribution, its moment-generating function can be written as M_{b_t}(v) = Σ_{i=1}^K w_t^i e^{v^T h_t^i}. A more detailed background on MGFs is given in Appendix A.2. In DPFRL, we use the values of the MGF at m learned locations v^{1:m} as the MGF feature vector. The j-th MGF feature is given by M_{b_t}(v^j). For clean notation, we write M_t^j in place of M_{b_t}(v^j). We use [h̄_t, M_t^{1:m}] as the features of the belief b_t, where h̄_t = Σ_{i=1}^K w_t^i h_t^i is the mean particle. The mean particle h̄_t, as the first-order moment, together with the m additional MGF features, gives a summary of the belief characteristics. The number of MGF features, m, controls how much additional information we extract from the belief. We empirically study the influence of MGF features in ablation studies. Compared to Ma et al. (2019), which uses the mean as the belief estimate, MGF features provide additional features of the empirical distribution. Compared to DVRL (Igl et al., 2018), which treats the Monte Carlo samples as a sequence and merges them with an RNN, MGF features are permutation-invariant, computationally efficient and easy to optimize, especially when the particle set is large.
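The following minimal sketch (our own illustration, not the released implementation) shows how such MGF features can be computed from a particle belief, assuming the learned locations v^{1:m} are stored as the rows of a bias-free linear layer.

import torch
import torch.nn as nn

class MGFBeliefFeatures(nn.Module):
    # Sketch: summarize a particle belief {(h_t^i, w_t^i)} into [mean particle, M_t^{1:m}].
    def __init__(self, h_dim, num_mgf=8):
        super().__init__()
        self.v = nn.Linear(h_dim, num_mgf, bias=False)   # rows act as the learned locations v^1..v^m

    def forward(self, h, w):
        # h: (K, B, h_dim); w: (K, B), normalized over the K particles
        h_mean = (w.unsqueeze(-1) * h).sum(dim=0)                    # first-order moment (mean particle)
        mgf = (w.unsqueeze(-1) * torch.exp(self.v(h))).sum(dim=0)    # M_bt(v^j) = sum_i w_i exp(v^j . h_i)
        return torch.cat([h_mean, mgf], dim=-1)                      # belief summary fed to pi and V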
Given the features [h̄_t, M_t^{1:m}] of b_t, we compute the policy p(a | b_t) with a policy network π(b_t). We train with an actor-critic RL algorithm, A2C (Mnih et al., 2016), where a value network V(b_t) is introduced to assist learning. We use small fully-connected networks for π(b_t) and V(b_t) that share the same input b_t. 4 EXPERIMENTS We evaluate DPFRL in a range of POMDP RL domains with increasing belief tracking and observation modeling complexity. We first use benchmark domains from the literature, Mountain Hike and 10 different Flickering Atari Games. We then introduce a new, more challenging domain, Natural Flickering Atari Games, which uses a random video stream as the background. Finally, we apply DPFRL to a challenging visual navigation domain with RGB-D observations rendered from real-world data. We compare DPFRL with a GRU network, a state-of-the-art POMDP RL method, DVRL, and ablations of the DPFRL architecture. As a brief conclusion, we show that: 1) DPFRL significantly outperforms GRU in most cases because of its explicit structure for belief tracking; 2) DPFRL outperforms the state-of-the-art DVRL in most cases even with simple observations, and its benefit increases dramatically with more complex observations because of DPFRL's discriminative update; 3) MGF features are more effective for summarizing the latent particle belief than alternatives. 4.1 EXPERIMENTAL SETUP We train DPFRL and the baselines with the same A2C algorithm, and use a similar network architecture and hyperparameters as the original DVRL implementation. DPFRL and DVRL differ in the particle belief update structure, but they use the same latent particle size dim(h) and the same number of particles K as in the DVRL paper (dim(h) = 128 and K = 30 for Mountain Hike; dim(h) = 256 and K = 15 for Atari games and visual navigation). The effect of the number of particles is discussed in Sect. 4.5. We train all models for the same number of iterations using the RMSProp optimizer (Tieleman & Hinton, 2012). Learning rates and gradient clipping values are chosen based on a search in the BeamRider Atari game, independently for each model. Further details are in the Appendix. We have not performed additional searches for the network architecture and other hyperparameters, nor tried other RL algorithms, such as PPO (Schulman et al., 2017), which may all improve our results. All reported results are averages over 3 different random seeds. We plot rewards accumulated in an episode, the same as DVRL (Igl et al., 2018). The curves are smoothed over time and averaged over parallel environment executions. 4.2 MOUNTAIN HIKE Mountain Hike was introduced by Igl et al. (2018) to demonstrate the benefit of belief tracking for POMDP RL. It is a continuous control problem where an agent navigates on a fixed 20 × 20 map. In the original task, partial observability is introduced by disturbing the agent's observation with additive Gaussian noise. To illustrate the effect of observation complexity in natural environments, we concatenate the original observation vector with a random noise vector. The complexity of the optimal policy remains unchanged, but the relevant information is now coupled with irrelevant observation features. More specifically, the state space and action space in Mountain Hike are defined as S = A = R^2, where s_t = [x_t, y_t] and a_t = [δx_t, δy_t]. Transitions of the agent are stochastic with additive Gaussian noise: s_{t+1} = s_t + a_t + ε_a, where ε_a ∼ N(0, 0.25). The observation space is O = R^{2+l}, where l is a predefined constant and l = 0 corresponds to the original setting. Observations are o_t = [o_t^s, o_t^n], where o_t^s = s_t + ε_s, ε_s ∼ N(0, 1), and o_t^n ∈ R^l is sampled from a uniform distribution U(−10, 10). The reward for each step is r_t = r(x_t, y_t) − 0.01 ||a_t||, where r(x_t, y_t) is shown in Fig. 3. Episodes end after 75 steps.
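For illustration, one environment step of this noisy-observation variant can be sketched as follows. This is not the original benchmark code: the function name, the reward-map argument, the reading of N(0, 0.25) as a variance, and observing the post-transition state are our assumptions.

import numpy as np

def mountain_hike_step(s, a, l, reward_map, rng):
    # One transition/observation step of the noisy-observation Mountain Hike (illustrative only).
    # s, a: 2-D state and action arrays; l: length of the appended noise vector.
    s_next = s + a + rng.normal(0.0, np.sqrt(0.25), size=2)    # eps_a ~ N(0, 0.25), read as variance 0.25
    o_state = s_next + rng.normal(0.0, 1.0, size=2)            # o^s_t = s_t + eps_s, eps_s ~ N(0, 1)
    o_noise = rng.uniform(-10.0, 10.0, size=l)                 # o^n_t ~ U(-10, 10)^l, irrelevant features
    obs = np.concatenate([o_state, o_noise])                   # observation in R^(2+l)
    reward = reward_map(s_next[0], s_next[1]) - 0.01 * np.linalg.norm(a)
    return s_next, obs, reward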
We train models for different settings of the noise vector length l, from l = 0 to l = 100. Results are shown in Fig. 4. We observe that DPFRL learns faster than DVRL and GRU in all cases, including the original setting l = 0. Importantly, as the noise vector length increases, the performance of DVRL and GRU degrades, while DPFRL is unaffected. This demonstrates the ability of DPFRL to track a latent belief without having to explicitly model complex observations. 4.3 ATARI GAMES WITH PARTIAL OBSERVABILITY Atari games are one of the most popular benchmark domains for RL methods (Mnih et al., 2013). Their partially observable variants, Flickering Atari Games, have been used to benchmark POMDP RL methods (Hausknecht & Stone, 2015; Zhu et al., 2018; Igl et al., 2018). Here, image observations are single frames that are randomly replaced by a blank frame with a probability of 0.5. The flickering observations introduce a simple form of partial observability. Another variant, Natural Atari Games (Zhang et al., 2018), replaces the simple black background of the frames of an Atari game with a randomly sampled video stream. This modification brings the Atari domain one step closer to the visually rich real world, in that the relevant information is now encoded in complex observations. As shown by Zhang et al. (2018), this poses a significant challenge for RL. We propose a new RL domain, Natural Flickering Atari Games, that involves both challenges: partial observability simulated by flickering frames, and complex observations simulated by random background videos. The background videos increase observation complexity without affecting the decision making complexity, making this a suitable domain for evaluating RL methods with complex observations. We sample the background video from the ILSVRC dataset (Russakovsky et al., 2015). Examples for the BeamRider game are shown in Fig. 5. Details are in Appendix B. We evaluate DPFRL on both Flickering Atari Games and Natural Flickering Atari Games. We use the same set of games as Igl et al. (2018). To ensure a fair comparison, we take the GRU and DVRL results from the paper for Flickering Atari Games, use the same number of training iterations as in Igl et al. (2018), and use the official DVRL open-source code to train for Natural Flickering Atari Games. Results are summarized in Table 1. We highlight the best performance in bold where the difference is statistically significant (p = 0.05). Detailed training curves are in Appendix E. We observe that DPFRL significantly outperforms GRU in almost all games, which indicates the importance of explicit belief tracking and shows that DPFRL can learn a useful latent belief representation. Despite the simpler observations, DPFRL significantly outperforms DVRL and achieves state-of-the-art results on 5 out of 10 standard Flickering Atari Games (ChopperCommand, MsPacman, BeamRider, Bowling, Asteroids), and it performs comparably in 3 other games (Centipede, Frostbite, IceHockey).
The strength of DPFRL shows even more clearly in the Natural Flickering Atari Games, where it significantly outperforms DVRL on 7 out of 10 games and performs similarly on the rest. In some games, e.g. Pong, DPFRL performs similarly with and without videos in the background (15.65 vs. 15.40), while the DVRL performance degrades substantially (−19.78 vs. 18.17). These results show that while the architectures of DPFRL and DVRL are similar, the policy-oriented discriminative update of DPFRL is much more effective for handling complex observations, and the MGF features provide a more powerful summary of the particle belief for decision making. However, on some games, e.g. ChopperCommand, even DPFRL performance drops significantly when adding background videos. This shows that irrelevant features can make a task much harder, even for a discriminative approach, as also observed by Zhang et al. (2018). 4.4 VISUAL NAVIGATION Figure 6: RGB-D Habitat observations. Table 2: Visual navigation results (SPL / success rate / reward): DPFRL 0.79 / 0.88 / 12.82±5.82; DVRL 0.09 / 0.11 / 5.22±2.24; GRU 0.63 / 0.74 / 10.14±2.82; PPO (Savva et al., 2019) 0.70 / 0.80 / —. Visual navigation poses a great challenge for deep RL (Mirowski et al., 2016; Zhu et al., 2017; Lample & Chaplot, 2017). We evaluate DPFRL for visual navigation in the Habitat Environment (Savva et al., 2019), using the real-world Gibson dataset (Xia et al., 2018). In this domain, a robot needs to navigate to goals in previously unseen environments. In each time step, it receives a first-person RGB-D camera image and its distance and relative orientation to the goal. The main challenge lies in the partial and complex observations: first-person view images only provide partial information about the unknown environment, and the relevant information for navigation, traversability, is encoded in rich RGB-D observations along with many irrelevant features, e.g., the texture of the walls. We use the Gibson dataset with the training and validation split provided by the Habitat challenge. We train models with the same architecture as for the Atari games, except for the observation encoder, which accounts for the different observation format. We evaluate models in unseen environments from the validation split and compute the same metrics as in the literature: SPL, success rate, and average reward. Results are shown in Table 2. Further details and results are in Appendices B and E. DPFRL significantly outperforms both DVRL and GRU in this challenging domain. DVRL performs especially poorly, demonstrating the difficulty of learning a generative observation model in realistic, visually rich domains. DPFRL also outperforms the PPO baseline from Savva et al. (2019). We note that submissions to the recently organized Habitat Challenge 2019 (Savva et al., 2019), such as Chaplot et al. (2019), have demonstrated better performance than the PPO baseline (although our results are not directly comparable because of the closed test set of the competition). However, these approaches rely on highly specialized structures, such as 2D mapping and 2D path planning, while we use the same generic network as for the Atari games. Future work may further improve our results by adding task-specific structure to DPFRL or by training with PPO instead of A2C. 4.5 ABLATION STUDY We conduct an extensive ablation study on the Natural Flickering Atari Games to understand the influence of each DPFRL component. The results are presented in Table 3.
The discriminative compatibility function is more effective than a generative observation function. DPFRL-generative replaces the discriminative compatibility function of DPFRL with a generative observation function, where grayscale image observations are modeled by pixel-wise Gaussian distributions with learned mean and variance. Unlike DVRL, DPFRL-generative only differs from DPFRL in the parameterization of the observation function; the rest of the architecture and the training loss remain the same. In most cases, the performance of DPFRL-generative degrades significantly compared to DPFRL. These results are aligned with our earlier observations, and indicate that the compatibility function is capable of extracting the relevant information from complex observations without having to learn a more complex generative model. More particles perform better. DPFRL with 1 particle performs poorly on most of the tasks (DPFRL-P1). This indicates that a single latent state is insufficient to represent the complex latent distribution required for the task, and that more particles may improve performance. MGF features are useful. We compare DPFRL using MGF features with DPFRL-mean, which only uses the mean particle, and with DPFRL-GRUmerge, which uses a separate RNN to summarize the belief, similar to DVRL. Results show that DPFRL-mean does not work as well as the standard DPFRL, especially for tasks that may need complex belief tracking, e.g., Pong. This can be attributed to the richer belief statistics provided by MGF features, and to the fact that they do not constrain the learned belief representation to be always meaningful when averaged. Comparing to DPFRL-GRUmerge shows that MGF features generally perform better. While an RNN may learn to extract useful features from the latent belief, optimizing the RNN parameters is harder, because they are not permutation invariant to the set of particles and they result in a long backpropagation chain. 5 CONCLUSION We have introduced DPFRL, a framework for POMDP RL in natural environments. DPFRL combines the strengths of Bayesian filtering and end-to-end RL: it performs explicit belief tracking with learnable particle filters optimized directly for the RL policy. DPFRL achieved state-of-the-art results on POMDP RL benchmarks from prior work, Mountain Hike and a number of Flickering Atari Games. Further, it significantly outperformed alternative methods in a new, more challenging domain, Natural Flickering Atari Games, as well as for visual navigation using real-world data. We have proposed a novel MGF feature for extracting statistics from an empirical distribution. MGF feature extraction could be applied beyond RL, e.g., for general sequence prediction. DPFRL does not perform well in some particular cases, e.g., DoubleDunk. While our task-oriented discriminative update is less susceptible to complex and noisy observations than a generative model, it does not benefit from an additional learning signal that could improve sample efficiency, e.g., through a reconstruction loss. Future work may combine a generative observation model with the discriminative update in the DPFRL framework. 6 ACKNOWLEDGEMENT This research is partially supported by ONR Global and AFRL grant N62909-18-1-2023. We want to thank Maximilian Igl for suggesting to add videos to the background of Atari games. A BACKGROUND A.1 PARTICLE FILTER ALGORITHM The particle filter is an approximate Bayes filter algorithm for belief tracking.
Bayes filters estimate the belief b_t, i.e., a posterior distribution over the state s_t, given the history of actions a_{1:t} and observations o_{1:t}. Instead of explicitly modeling the posterior distribution, the particle filter approximates the posterior with a set of weighted particles, b_t ≈ {(s_t^i, w_t^i)}_{i=1}^K, and updates the particles in a Bayesian manner. Importantly, the particle set can approximate arbitrary distributions, e.g., Gaussians, continuous multi-modal distributions, etc. The mean state can be estimated as the mean particle s̄_t = Σ_{i=1}^K w_t^i s_t^i. The particle updates include three steps: transition update, measurement update, and resampling. Transition update. We first update the particles with a given motion model. More specifically, we sample the next state s_{t+1}^i from a generative transition function s_{t+1}^i ∼ p(s | s_t^i, a_t), (8) where p(s | s_t^i, a_t) is the transition function. Measurement update. The particle weights are then updated using the observation likelihoods w_{t+1}^i = η p(o_t | s_{t+1}^i) w_t^i, η = 1 / Σ_{i=1}^K w_{t+1}^i, (9) where η is a normalization factor and p(o_t | s_{t+1}^i) is the observation likelihood, computed by evaluating the observation o_t under a generative observation function p(o | s_{t+1}^i). Resampling. The particle filter algorithm can suffer from particle degeneracy, where after some update steps only a few particles have non-zero weights. This would prevent the particle filter from approximating the posterior distribution effectively. Particle degeneracy is typically addressed by resampling, where new particles are sampled with repetition proportional to their weights. Specifically, we sample particles from a categorical distribution p parameterized by the particle weights {w_t^i}_{i=1}^K: p(i) = w_t^i, (10) where p(i) is the probability of the i-th category, i.e., the i-th particle. The new particles approximate the same distribution, but they assign more representation capacity to the relevant regions of the state space. A.2 MOMENT-GENERATING FUNCTIONS In probability theory, the moment-generating function (MGF) is an alternative specification of the probability distribution of a real-valued random variable (Bulmer, 1979). As its name suggests, the MGF of a random variable can be used to generate all of its moments, which characterize its probability distribution. Mathematically, the MGF of a random variable X with dimension m is defined by M_X(v) = E[e^{v^T X}], (11) where v ∈ R^m; the MGF of the random variable X can be viewed as the expectation of the random variable e^{v^T X}. Consider the series expansion of e^{v^T X}: e^{v^T X} = 1 + v^T X + (v^T X)^2 / 2! + . . . + (v^T X)^n / n! + . . . (12) This leads to the well-known fact that the j-th order moment M_j (a j-way tensor) is the j-th order derivative of the MGF at v = 0: M_j = d^j M_X / dv^j |_{v=0}. (13) In DPFRL, we use MGFs as additional features to provide moment information about the particle distribution. DPFRL learns to extract useful moment features for decision making by directly optimizing for the policy p(a | b_t). B EXPERIMENT DETAILS B.1 IMPLEMENTATION DETAILS Observation Encoders: For the observation encoders, we use the same structure as DVRL (Igl et al., 2018) for a fair comparison. For Mountain Hike, we use two fully connected layers with batch normalization and ReLU activation as the encoder. The dimension of both layers is 64. For the rest of the domains, we first down-sample the images to 84 × 84, then process them with 3 2D-convolution layers with channel numbers (32, 64, 32), kernel sizes (8, 4, 3) and strides (4, 2, 1), without padding. The compass and goal information are a vector of length 2; they are appended to the image encoding as part of the input.
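For reference, the image encoder described above can be sketched as the small convolutional network below; the 32 x 7 x 7 = 1568 flattened output matches the decoder dimension mentioned next, while the exact placement of the activations and the default input channel count are our assumptions.

import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    # Sketch of the 84 x 84 image encoder: channels (32, 64, 32), kernels (8, 4, 3),
    # strides (4, 2, 1), no padding -> 32 x 7 x 7 = 1568 features.
    def __init__(self, in_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )

    def forward(self, img, compass_goal=None):
        feat = self.net(img)                        # (B, 1568)
        if compass_goal is not None:                # length-2 compass/goal vector (Habitat only)
            feat = torch.cat([feat, compass_goal], dim=-1)
        return feat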
Observation Decoders: Both DVRL and DPFRL-generative need observation decoders. For Mountain Hike, we use the same structure as the encoder in reversed order. For the image domains, the decoder is a transposed 2D-convolutional network with the reversed structure of the encoder, preceded by an additional fully connected layer that outputs the required dimension (1568 for Atari and Habitat navigation, both of which have 84 × 84 observations). Observation-conditioned transition network: For f_trans(h_{t-1}^i, a_t, o_t) we directly use the transition function of PF-GRU (Ma et al., 2019), which is a stochastic function with a GRU-style gated structure. The action a_t is first encoded by a fully connected layer with batch normalization and ReLU activation. The encoding dimension is 64 for Mountain Hike and 128 for all other tasks. The mean and variance of the normal distribution are learned by two additional fully connected layers; for the variance, we use Softplus as the activation function. State-observation compatibility network: f_obs is implemented by a single fully connected layer without activation. In DVRL, the observation function is parameterized over the full observation space o, and p(o | h_{t-1}^i, a_t^i) is modeled as a multivariate independent Bernoulli distribution whose parameters are again determined by a neural network (Igl et al., 2018). For numerical stability, all probabilities are stored and computed in log-space, and the particle weights are always normalized after each weight update. Soft-resampling: The soft-resampling hyperparameter α is set to 0.9 for Mountain Hike and 0.5 for the rest of the domains. Note that soft-resampling is used only for DPFRL, not for DVRL. DVRL averages the particle weights to 1/K after each resampling step, which means the resampling step cannot be trained by RL. Belief Summary: The GRU used in DVRL and DPFRL-GRUmerge is a single-layer GRU whose input dimension equals the dimension of the latent vector plus 1, the corresponding particle weight. The dimension of this GRU equals the dimension of the latent vector. For the MGF features, we use a fully connected layer whose output dimension equals the number of MGF features. The activation function used is the exponential function. We could potentially explore other activation functions, e.g., ReLU, as generalized MGF features. Policy and Value Networks: The policy network π(b_t) and the value network V(b_t) are fully connected layers that take the belief summary b_t = [h̄_t, M_t^{1:m}] as input. Their output dimensions are chosen according to the RL task. Model Learning: For RL, we use an A2C algorithm with 16 parallel environments for both Mountain Hike and the Atari games; for Habitat navigation, we only use 6 parallel environments due to GPU memory constraints. The loss function for DPFRL and the GRU-based policy is the standard A2C loss, L_t^{A2C} = L_t^A + λ^V L_t^V + λ^H L_t^H, where L_t^A is the policy loss, L_t^V is the value loss, L_t^H is the entropy loss encouraging exploration, and λ^V and λ^H are two hyperparameters. For all experiments, we use λ^V = 0.5 and λ^H = 0.01. For DVRL, an additional encoding loss L_t^E is used to train the sequential VAE, which gives the loss function L_t^{DVRL} = L_t^{A2C} + λ^E L_t^E. We follow the default setting provided by Igl et al. (2018) and set λ^E = 0.1.
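A minimal sketch of how these loss terms are combined is given below; the individual terms are assumed to be computed by the A2C rollout elsewhere, and the function name is ours.

def a2c_loss(policy_loss, value_loss, entropy_loss, encoding_loss=None,
             lambda_v=0.5, lambda_h=0.01, lambda_e=0.1):
    # L^A2C = L^A + lambda_V * L^V + lambda_H * L^H; DVRL additionally adds lambda_E * L^E.
    loss = policy_loss + lambda_v * value_loss + lambda_h * entropy_loss
    if encoding_loss is not None:          # only used for the DVRL baseline
        loss = loss + lambda_e * encoding_loss
    return loss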
The rest of the hyperparameters, including the learning rate, the gradient clipping value and the soft-resampling α, are tuned on BeamRider and applied directly to all domains because of the high computational cost of the experiments. The learning rate for all networks is searched among the values (3 × 10^{-5}, 5 × 10^{-5}, 1 × 10^{-4}, 2 × 10^{-4}, 3 × 10^{-4}); the gradient clipping value is searched among {0.5, 1.0}; the soft-resampling α is searched among {0.5, 0.9}. The best performing learning rates were 1 × 10^{-4} for DPFRL and GRU, and 2 × 10^{-4} for DVRL; the gradient clipping value for all models was 0.5; the soft-resampling α is set to 0.9 for Mountain Hike and 0.5 for the Atari games. B.2 EXPERIMENTAL SETUP Natural Flickering Atari Games. We follow the settings of prior work (Zhu et al., 2018; Igl et al., 2018): 1) 50% of the frames are randomly dropped; 2) a frameskip of 4 is used; 3) there is a 0.25 chance of repeating an action twice. In our experiments, we sample background videos from the ILSVRC dataset (Russakovsky et al., 2015). Only videos longer than 500 frames are sampled, to make sure the video length is long enough to introduce variability. For each new episode, we first sample a new video from the dataset, and a random starting pointer is sampled within this video. Once the video finishes, the pointer is reset to the first frame (not the starting pointer we sampled) and playback continues from there. Experiment platform: We implement all models using PyTorch (Paszke et al., 2017) with CUDA 9.2 and CuDNN 7.1.2. The Flickering Atari environments are modified based on OpenAI Gym (Brockman et al., 2016), and we directly use the Habitat APIs for visual navigation. Collecting experience and performing gradient updates are done on a single computation node with access to one GPU. For Mountain Hike and the Atari games we use NVidia GTX1080Ti GPUs; for Habitat visual navigation we use NVidia RTX2080Ti GPUs. C PF-GRU NETWORK ARCHITECTURE We implement DPFRL with gated transition and observation functions for particle filtering, similar to PF-GRU (Ma et al., 2019). In a standard GRU, the memory update is implemented by a gated function: h_t = (1 − z_t) ◦ tanh(n_t) + z_t ◦ h_{t−1}, n_t = W_n [r_t ◦ h_{t−1}, x_t] + b_n, (14) where W_n and b_n are the corresponding weights and biases, and z_t and r_t are the learned gates. PF-GRU introduces a stochastic cell update by assuming that the update to the memory, n_t^i, follows a parameterized Gaussian distribution: n_t^i = W_n [r_t^i ◦ h_{t−1}^i, x_t] + b_n + ε_t^i, ε_t^i ∼ N(0, Σ_t^i), Σ_t^i = W_Σ [h_{t−1}^i, x_t] + b_Σ. (15) With x_t = [f_enc^o(o_t), f_enc^a(a_t)], we implement the transition function h_{t+1}^i ∼ f_trans(h_t^i, o_t, a_t), where f_enc^o is the encoding network for observations and f_enc^a is the encoding network for actions. For the observation function, we directly use a fully connected layer f_obs(h_t^i, o_t) = W_o [h_t^i, o_t] + b_o, where W_o and b_o are the corresponding weights and biases.
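A sketch of the stochastic gated update of Eqs. 14–15 is given below; the class and variable names are ours, and the Softplus on the learned variance follows the implementation detail mentioned in Appendix B.1 rather than Eq. 15 itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticGRUCellSketch(nn.Module):
    # Sketch of the PF-GRU-style stochastic memory update (Eqs. 14-15).
    def __init__(self, x_dim, h_dim):
        super().__init__()
        self.gates = nn.Linear(x_dim + h_dim, 2 * h_dim)   # produces the gates z_t and r_t
        self.cand = nn.Linear(x_dim + h_dim, h_dim)        # W_n, b_n
        self.noise_var = nn.Linear(x_dim + h_dim, h_dim)   # W_Sigma, b_Sigma

    def forward(self, h_prev, x):
        hx = torch.cat([h_prev, x], dim=-1)
        z, r = torch.sigmoid(self.gates(hx)).chunk(2, dim=-1)
        var = F.softplus(self.noise_var(hx)) + 1e-4        # Softplus keeps the learned variance positive
        n = self.cand(torch.cat([r * h_prev, x], dim=-1)) + var.sqrt() * torch.randn_like(var)
        return (1.0 - z) * torch.tanh(n) + z * h_prev      # Eq. 14 with the stochastic candidate n_t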
D DPFRL ALGORITHM Algorithm 1: DPFRL.
Input: previous belief b_{t−1} ≈ {(h_{t−1}^i, w_{t−1}^i)}_{i=1}^K, observation o_t, action a_t
x_t^o ← Encoder(o_t) (encode the raw observation)
h_t^i ∼ f_trans(h_{t−1}^i, a_t, x_t^o) (transition update)
w_t^i ← η f_obs(x_t^o, h_t^i) w_{t−1}^i, η = 1 / Σ_{i=1}^K w_t^i (observation update)
{(h'^i_t, w'^i_t)}_{i=1}^K ← Soft-Resampling({(h_t^i, w_t^i)}_{i=1}^K) (soft-resampling)
h̄_t ← Σ_{i=1}^K w'^i_t h'^i_t (compute the mean)
for j = 1 : m do M_t^j ← Σ_{i=1}^K w'^i_t exp(v^j · h'^i_t) (compute MGF features) end
p(a | b_t) ← π(h̄_t, M_t^{1:m}) (compute the policy)
V(b_t) ← V(h̄_t, M_t^{1:m}) (compute the value)
Output: updated belief b_t ≈ {(h'^i_t, w'^i_t)}_{i=1}^K, policy p(a | b_t) and value V(b_t)
E ADDITIONAL RESULTS E.1 FLICKERING ATARI GAMES PLOTS We provide the accumulated reward curves for the Atari experiments in this section. Standard Flickering Atari Games. For the standard Flickering Atari Games we provide the training curves below. Results for DVRL and GRU are taken directly from Igl et al. (2018). [Training curves: return vs. training frames (up to 5 × 10^7) for DPFRL, DVRL and GRU on each game.] Panels: (a) Flickering Pong, (b) Flickering ChopperCommand, (c) Flickering MsPacman, (d) Flickering Centipede, (e) Flickering BeamRider, (f) Flickering Frostbite, (g) Flickering Bowling, (h) Flickering IceHockey, (i) Flickering DoubleDunk, (j) Flickering Asteroids. Natural Flickering Atari Games. For the Natural Flickering Atari Games we report results on a separate validation set, where the background videos are different from the training set. The validation environment steps once after every 100 training iterations. [Training curves: return vs. training frames (up to 5 × 10^7) for DPFRL, DVRL and GRU on each game.] Panels: (a) Natural Flickering Pong, (b) Natural Flickering ChopperCommand, (c) Natural Flickering MsPacman, (d) Natural Flickering Centipede, (e) Natural Flickering BeamRider, (f) Natural Flickering Frostbite, (g) Natural Flickering Bowling, (h) Natural Flickering IceHockey, (i) Natural Flickering DoubleDunk, (j) Natural Flickering Asteroids. E.2 VISUAL NAVIGATION We present the reward curve for the Habitat visual navigation task below. DPFRL outperforms both the GRU-based policy and DVRL given the same training time.
DVRL struggles with training the observation model and fails during the first half of the training time. The GRU-based policy learns fast; however, with only a model-free belief tracker, it struggles to achieve higher reward after a certain point. We only provide the reward curve here, as SPL and success rate are only evaluated after training is finished. E.3 PARTICLE VISUALIZATION WITH PCA We further visualize the latent particles by principal component analysis (PCA), keeping the first 2 components. We choose a trajectory from the Habitat visual navigation experiment, where 15 particles are used. We observe that the particles initially spread across the space (t = 0). As the robot only receives partial information in the visual navigation task, the particles gradually form a distribution with two clusters (t = 56), which represent two major hypotheses of its current state. After more information is incorporated into the belief, they begin to converge and finally become a single cluster (t = 81). We did not observe particle depletion or posterior collapse in our experiments. This could be further guarded against by adding an entropy loss on the learned variance of f_trans, which we leave for future study. [PCA scatter plots of the 15 latent particles, projected onto the first two principal components, at (a) t = 0, (b) t = 22, (c) t = 30, (d) t = 56, (e) t = 74, (f) t = 81.]
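The visualization described above can be reproduced with a standard PCA projection of the latent particles; the sketch below assumes the particles of a trajectory have been collected as a list of (K, dim(h)) arrays and is not the original plotting code.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_particles_pca(particle_history, timesteps):
    # particle_history: list of (K, h_dim) arrays, one entry per time step of the trajectory.
    pca = PCA(n_components=2)
    pca.fit(np.concatenate(particle_history, axis=0))   # a single shared 2-D projection
    for t in timesteps:
        xy = pca.transform(particle_history[t])
        plt.figure()
        plt.scatter(xy[:, 0], xy[:, 1])
        plt.title(f"t = {t}")
    plt.show()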
1. What is the focus and contribution of the paper on POMDP RL? 2. What are the strengths of the proposed approach, particularly in combining Bayesian filtering and policy-oriented discriminative modeling? 3. What are the weaknesses of the paper, especially in terms of experimental results and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions regarding the paper's methodology or results that the reviewer would like further clarification on?
Review
Review This is a well-written paper. It introduces a principled method for POMDP RL: Discriminative Particle Filter Reinforcement Learning (DPFRL). It combines the strengths of Bayesian filtering and policy-oriented discriminative modeling. DPFRL encodes a differentiable particle filter with learned transition & observation models in a neural network, allowing for reasoning with partial observations over multiple time steps. It performs explicit belief tracking with discriminative learnable particle filters optimized directly for the RL policy. Experimental results show that DPFRL achieves state-of-the-art results on POMDP RL benchmarks. I especially like that the paper covers a diverse set of applications, including Mountain Hike, the classic Atari games, and visual navigation (Habitat). Improved performance is reported. Results show that the particle filter structure is effective for handling partial observations, and the discriminative parameterization allows for complex observations.
(2018) recently proposed DVRL, which learns a generative observation model embedded into the policy through a Bayes filter. Since 1https://github.com/Yusufma03/DPFRL the Bayes filter tracks the belief explicitly, DVRL performs much better than generic RNNs under partial observability. However, a Bayes filter normally assumes a generative observation model, that defines the probability p(o | ht) of receiving an observation o = ot given the latent state ht (Fig. 1b). Learning this model can be very challenging since it requires modeling all observation features, including features irrelevant for RL. When o is an image, p(o | ht) is a distribution over all possible images. This means, e.g., to navigate in a previously unseen environment, we need to learn the distribution of all possible environments with their visual appearance, lighting condition, etc. — a much harder task than learning to extract features relevant to navigation, e.g., the traversable space. We introduce the Discriminative Particle Filter Reinforcement Learning (DPFRL), a POMDP RL method that learns to explicitly track a belief over the latent state without a generative observation model, and make decisions based on features of the belief (Fig. 1a). DPFRL approximates the belief by a set of weighted learnable latent particles {(hit, wit)}Ki=1, and it tracks this particle belief by a nonparametric Bayes filter algorithm, an importance weighted particle filter, encoded as a differentiable computational graph in the neural network architecture. The importance weighted particle filter applies discriminative update to the belief with an observation-conditioned transition model and a discriminative state-observation compatibility function (serving as the importance weights), both of which are learnable neural networks trained end-to-end. By using these update functions instead of the transition and observation models of the standard particle filter, DPFRL sidesteps the difficulty of learning a generative observation model (Fig. 1b). The model is discriminative in the sense that the compatibility function, fobs(ot, ht), as shown in Fig. 1c, while playing an analogue role as p(ot | ht), is not required to directly represent a normalized distribution over observations; and through end-toend training it only needs to model observation features relevant for the RL task. Finally, to summarize the particle belief for the policy, we introduce novel learnable features based on Moment-Generating Functions (MGFs) (Bulmer, 1979). MGF features are computationally efficient and permutation invariant, and they can be directly optimized to provide useful higher-order moment information for learning a policy. MGF features could be also used as learned features of any empirical distribution in applications beyond RL. We evaluate DPFRL on a range of POMDP RL domains: a continuous control task from Igl et al. (2018), Flickering Atari Games (Hausknecht & Stone, 2015), Natural Flickering Atari Games, a new domain with more complex observations that we introduce, and the Habitat visual navigation domain using real-world data (Savva et al., 2019). DPFRL outperforms state-of-the-art POMDP RL methods in most cases. Results show that belief tracking with a particle filter is effective for handling partial observability, and the discriminative update and MGF-based belief features allow for complex observations. 2 RELATED WORK Real-world decision-making problems are often formulated as POMDPs. 
POMDPs are notoriously hard to solve; in the worst case, they are computationally intractable (Papadimitriou & Tsitsiklis, 1987). Approximate POMDP solvers have made dramatic progress in solving large-scale POMDPs (Kurniawati et al., 2008). Particle filters have been widely adopted as a belief tracker for POMDP solvers (Silver & Veness, 2010; Somani et al., 2013) having the flexibility to model complex and multi-modal distributions, unlike Gaussian and Kalman filters. However, predefined model and state representations are required for these methods (see e.g. Bai et al. (2015)). Given the advances in generative neural network models, various neural models have been proposed for belief tracking (Chung et al., 2015; Maddison et al., 2017; Le et al., 2018; Naesseth et al., 2018). DVRL (Igl et al., 2018) uses a Variational Sequential Monte-Carlo method (Naesseth et al., 2018), similar to the particle filter we use, for belief tracking in RL. This gives better belief tracking capabilities, but as we demonstrate in our experiments, generative modeling is not robust in complex observation spaces with high-dimensional irrelevant observation. More powerful generative models, e.g., DRAW (Gregor et al., 2015), could be considered to improve generative observation modeling; however, evaluating a complex generative model for each particle would significantly increase the computational cost and optimization difficulty. Learning a robust latent representation and avoiding reconstructing observations are of great interest for RL (Oord et al., 2018; Guo et al., 2018; Hung et al., 2018; Gregor et al., 2019; Gelada et al., 2019). Discriminative RNNs have also been widely used for belief approximation in partially observable domains (Bakker, 2002; Wierstra et al., 2007; Foerster et al., 2016). The latent representation is directly optimized for the policy p(a|ht) that skips observation modeling. For example, Hausknecht & Stone (2015) and Zhu et al. (2018) tackle partially observable Flickering Atari Games by extending DQN (Mnih et al., 2013) with an LSTM memory. Our experiments demonstrate that the additional structure for belief tracking provided by a particle filter can give improved performance in RL. Embedding algorithms into neural networks to allow end-to-end discriminative training has gained attention recently. For belief tracking, the idea has been used in the differentiable histogram filter (Jonschkowski & Brock, 2016), Kalman filter (Haarnoja et al., 2016) and particle filter (Karkus et al., 2018; Jonschkowski et al., 2018). Further, Karkus et al. (2017) combined a learnable histogram filter with the Value Iteration Network (Tamar et al., 2016) and introduced a learnable POMDP planner, QMDP-net. However, these methods require a predefined state representation and are limited to relatively small state spaces. Ma et al. (2019) integrated the particle filter with standard RNNs, e.g., the LSTM, and introduced PF-RNNs for sequence prediction. We build on the work of Ma et al. (2019) and demonstrate its advantages for RL with complex partial observations, and extend it with MGF features for improved decision making from particle beliefs. Note that our framework is not specific to PF-RNNs, and could be applied to other differentiable particle filters as well. 3 DISCRIMINATIVE PARTICLE FILTER REINFORCEMENT LEARNING We introduce DPFRL for reinforcement learning under partial and complex observations. The DPFRL architecture is shown in Fig. 2. 
It has two main components, a discriminatively trained particle filter that tracks a latent belief bt, and an actor network that learns a policy p(a | bt) given the belief bt. 3.1 PARTICLE FILTER FOR LATENT BELIEF TRACKING Latent State Representation. In POMDPs the semantics of states s is typically defined explicitly. State variables may correspond to the position of a robot, configuration of obstacles, etc. In DPFRL, we do not require explicit specification of the state variables, but implicitly represent the state as a vector h of latent variables, that is, the semantics of the state variables are learned instead of being pre-specified. We use a fully differentiable particle filter algorithm to maintain a belief over h. More specifically, we approximate the belief with a set of weighted latent particles bt ≈ {(hit, wit)}Ki=1, where {hit}Ki=1 are K latent states learned by policy-oriented training, and {wit}Ki=1 represents the corresponding weights. Each latent state hit stands for a hypothesis in the belief; the set of latent particles provide an approximate representation for the belief. Belief Update. In a basic particle filter, there are two key steps to update a particle belief {hit−1, wit−1}Ki=1 to a new particle belief {hit, wit}Ki=1 upon receiving an observation ot after executing action at. hit ∼ p(h | hit−1, at), (1) wit = ηp(ot | hit)wit−1, η = 1/ΣKi=1p(ot | hit)wit−1 (2) The first step, Eq. 1, takes the transition dynamics into account to update each particle. The second step, Eq. 2, takes the observation into account to reweigh the new particles. Our belief update has a similar structure as the standard particle filter, but we replace the transition model and the observation model with richer functions to make the update more suitable for learning a policy in a partially observable domain. Specifically, the update equations are as follows. hit ∼ ftrans(hit−1, at, ot), (3) wit = ηfobs(h i t, ot)w i t−1, η = 1/Σ K i=1fobs(h i t, ot)w i t−1, (4) {(h′it , w′it )}Ki=1 = Soft-Resampling({(hit, wit)}Ki=1) (5) Below, we first explain the intuition behind the above updates and the roles of ftrans and fobs as compared to the standard transition and observation models. We then derive that above rules from an importance weighed particle filter in Sect. 3.2. Observation-conditioned transition update. Eq. 3 takes a form more general than that in Eq. 2: instead of using the transition dynamics p(h | hit−1, at) to evolve a particle, we use a more general observation-conditioned transition ftrans(h | hit−1, at, ot). Incorporating the observation allows alleviating the problem of sampling unlikely particles. In fact, if we take ftrans to be p(h | hit−1, at, ot), then this allows us to skip Eq. 2, and completely avoids sampling particles that are likely considering at only, but unlikely considering both at and ot. Of course, in RL we do not have access to p(h | hit−1, at, ot), and instead ftrans is learned. In our implementation, a network first extracts features from ot, they are fed to a gated function following the PF-GRU of Ma et al. (2019), which outputs the mean and variance of a normal distribution. Details are in the Appendix. Importance weighting via a compatibility function. Eq. 4 is a relaxed version of Eq. 2: instead of using the observation model p(ot | hit) to adjust the particle weights based on their compatibility with the observation, we use a general non-negative compatibility function fobs(hit, ot). 
If the compatibility function is required to satisfy the normalization constraint that ∑ o fobs(h, o) is a constant for all h, then it is equivalent to a conditional distribution of o given h. We do not require this, and thus the update loses the probabilistic interpretation in Eq. 2. However, eliminating the need for the normalization constraint allows the compatibility function to be efficiently trained, as we can avoid computing the normalization constant. In addition, since the observation has already been incorporated in Eq. 3, we actually expect that the weights need to be adjusted in a way different from the standard particle filter. In our implementation, fobs(hit, ot) is a neural network with a single fully connected layer that takes in a concatenation of hit and features extracted from ot. The output of the network is interpreted as the log of fobs; and for numerical stability we perform the weight updates of Eq. 4 in the log-space as well. Note that more complex network architectures could improve the capability of fobs, which we leave to future work. Soft-resampling. To avoid particle degeneracy, i.e., most of the particles having a near-zero weight, particle filters typically resample particles. We adopt the soft-resampling strategy of Karkus et al. (2018); Ma et al. (2019), that provides approximate gradients for the non-differentiable resampling step. Instead of sampling from pt(i) = wit, we sample particles {h′it }Ki=1 from a softened proposal distribution q(i) = αwit + (1 − α)1/K, where α is an trade-off parameter. The new weights are derived using importance sampling: w′it = wit αwit+(1−α)1/K . We can have the final particle belief as {(h′it , w′it )}Ki=1 = Soft-Resampling({(hit, wit)}Ki=1). As a result, fobs can be optimized with global belief information and model shared useful features across multiple time steps. Another related concern is that the particle distribution may collapse to particles with the same latent state. This can be avoided by ensuring that the stochastic transition function ftrans has a non-zero variance, e.g., by adding a small constant to the learned variance. End-to-end training. In DPFRL the observation-conditioned transition function ftrans and the compatibility function fobs are learned. Instead of training for a modeling objective, they are trained end-to-end for the final RL objective, backpropagating gradients through the belief-conditional policy p(a | bt) and the update steps of the particle filter algorithm, Eq. 3-5. 3.2 CONNECTION TO IMPORTANCE WEIGHTED PARTICLE FILTER Our belief update can be motivated from the following importance weighted particle filter. Learning directly p(h′ | h, a, o) is generally difficult, but if we have a distribution q(h′ | h, a, o) that is easy to learn, then we can use importance sampling to update a particle belief. hit ∼ q(hit−1, at, ot), (6) wit = ηf(h i t, h i t−1, at, ot)w i t−1, η = 1/Σ K i=1f(h i t, h i t−1, at, ot)w i t−1 (7) where f = p/q is the importance weight. Consider the case that q(h′ | h, a, o) is the conditional distribution of a joint distribution q(h′, h, a, o) of the form p(h′ | h, a)q(o | h′). That is, p and q share the same transition dynamics p(h′ | h, a). Then the importance weight f is a function of h′ and o only, because f(h′, h, a, o) = p(h′ | h, a, o) q(h′ | h, a, o) = p(h′ | h, a)p(o | h′) p(h′ | h, a)q(o | h′) = p(o | h′) q(o | h′) . This simpler form is exactly the form that we used for fobs in our belief update. 3.3 DISCRIMINATIVE VS. 
3.3 DISCRIMINATIVE VS. GENERATIVE MODELING
We expect the discriminative compatibility function to be more effective than a generative model for the following reasons. A generative model aims to approximate p(o | h) by learning a function that takes h as input and outputs a parameterized distribution over o. When o is, e.g., an image, this requires approximations, e.g., using pixel-wise Gaussians with learned mean and variance. This model is also agnostic to the RL task and considers all observation features equally, including features irrelevant for filtering and decision making. In contrast, f_obs takes o and h as inputs, and estimates the compatibility of o and h for particle filtering directly. This function avoids forming a parametric distribution over o, and it can be easier to learn. The same functional form is used for the energy function of energy-based models (LeCun et al., 2006) and in contrastive predictive coding (Oord et al., 2018), with similar benefits. For example, f_obs may learn unnormalized likelihoods that are only proportional to p(o | h) up to an o-dependent factor, because after the normalization in Eq. 4, they would give the same belief update as the normalized p(o | h). Further, because f_obs is trained for the final RL objective instead of a modeling objective, it may learn a compatibility function that is useful for decision making, but that does not model all observation features and has no proper probabilistic interpretation. While the task-oriented training of discriminative models may improve policy performance for the reasons above, it cannot take advantage of an auxiliary learning signal like the reconstruction objective of a generative model. An interesting line of future work may combine generative models with a compatibility function to simultaneously benefit from both formulations.
3.4 BELIEF-CONDITIONAL ACTOR NETWORK
Conditioning a policy directly on a particle belief is non-trivial. To feed the belief to the networks, we need to summarize it into a single vector. We introduce a novel feature extraction method for empirical distributions based on Moment-Generating Functions (MGFs). The MGF of an n-dimensional random variable X is given by M_X(v) = E[e^{v^T X}], v ∈ R^n. In statistics, the MGF of a random variable is an alternative specification of its probability distribution (Bulmer, 1979). Since the particle belief b_t is an empirical distribution, its moment-generating function is M_{b_t}(v) = Σ_{i=1}^K w^i_t e^{v^T h^i_t}. A more detailed background on MGFs is in Appendix A.2. In DPFRL, we use the values of the MGF at m learned locations v^{1:m} as the feature vector of the MGF. The j-th MGF feature is given by M^j_{b_t}(v^j). For clean notation, we use M^j_t in place of M^j_{b_t}(v^j). We use [h̄_t, M^{1:m}_t] as features for the belief b_t, where h̄_t = Σ_{i=1}^K w^i_t h^i_t is the mean particle. The mean particle h̄_t, as the first-order moment, and the m additional MGF features give a summary of the belief characteristics. The number of MGF features, m, controls how much additional information we extract from the belief. We empirically study the influence of MGF features in ablation studies. Compared to Ma et al. (2019), which uses the mean as the belief estimate, MGF features provide additional information about the empirical distribution. Compared to DVRL (Igl et al., 2018), which treats the Monte-Carlo samples as a sequence and merges them with an RNN, MGF features are permutation-invariant, computationally efficient, and easy to optimize, especially when the particle set is large.
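As an illustration of the MGF-based belief summary, here is a small sketch assuming PyTorch; the module name, the initialization scale of the learned locations v^{1:m}, and the clamp added for numerical safety are our own choices, not details from the paper.

```python
import torch
import torch.nn as nn

class MGFBeliefSummary(nn.Module):
    """Summarizes a particle belief {(h_i, w_i)} into [mean particle, m MGF features].

    The m locations v^1..v^m are learned parameters; each feature is
    M^j = sum_i w_i * exp(<v^j, h_i>), the empirical MGF evaluated at v^j.
    """
    def __init__(self, dim_h, num_features):
        super().__init__()
        self.v = nn.Parameter(0.01 * torch.randn(num_features, dim_h))

    def forward(self, h, w):
        # h: [K, dim_h] particles, w: [K] normalized weights
        mean_particle = (w.unsqueeze(-1) * h).sum(dim=0)        # [dim_h]
        proj = (self.v @ h.t()).clamp(max=20.0)                  # [m, K], clamped for safety
        mgf = (proj.exp() * w.unsqueeze(0)).sum(dim=-1)          # [m]
        return torch.cat([mean_particle, mgf], dim=-1)

# Example: summarize 15 particles of dimension 256 with m = 3 MGF features.
summary = MGFBeliefSummary(dim_h=256, num_features=3)
h = torch.randn(15, 256)
w = torch.softmax(torch.randn(15), dim=0)
features = summary(h, w)   # shape [256 + 3]
```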
Given the features [h̄_t, M^{1:m}_t] for b_t, we compute the policy p(a | b_t) with a policy network π(b_t). We train with an actor-critic RL algorithm, A2C (Mnih et al., 2016), where a value network V(b_t) is introduced to assist learning. We use small fully-connected networks for π(b_t) and V(b_t) that share the same input b_t.
4 EXPERIMENTS
We evaluate DPFRL in a range of POMDP RL domains with increasing belief tracking and observation modeling complexity. We first use benchmark domains from the literature, Mountain Hike and 10 different Flickering Atari Games. We then introduce a new, more challenging domain, Natural Flickering Atari Games, that uses a random video stream as the background. Finally we apply DPFRL to a challenging visual navigation domain with RGB-D observations rendered from real-world data. We compare DPFRL with a GRU network, a state-of-the-art POMDP RL method, DVRL, and ablations of the DPFRL architecture. As a brief conclusion, we show that: 1) DPFRL significantly outperforms GRU in most cases because of its explicit structure for belief tracking; 2) DPFRL outperforms the state-of-the-art DVRL in most cases even with simple observations, and its benefit increases dramatically with more complex observations because of DPFRL's discriminative update; 3) MGF features are more effective for summarizing the latent particle belief than alternatives.
4.1 EXPERIMENTAL SETUP
We train DPFRL and baselines with the same A2C algorithm, and use a similar network architecture and hyperparameters as the original DVRL implementation. DPFRL and DVRL differ in the particle belief update structure, but they use the same latent particle size dim(h) and the same number of particles K as in the DVRL paper (dim(h) = 128 and K = 30 for Mountain Hike, dim(h) = 256 and K = 15 for Atari games and visual navigation). The effect of the number of particles is discussed in Sect. 4.5. We train all models for the same number of iterations using the RMSProp optimizer (Tieleman & Hinton, 2012). Learning rates and gradient clipping values are chosen based on a search in the BeamRider Atari game independently for each model. Further details are in the Appendix. We have not performed additional searches over the network architecture and other hyperparameters, nor tried other RL algorithms, such as PPO (Schulman et al., 2017), which may all improve our results. All reported results are averages over 3 different random seeds. We plot rewards accumulated in an episode, same as DVRL (Igl et al., 2018). The curves are smoothed over time and averaged over parallel environment executions.
4.2 MOUNTAIN HIKE
Mountain Hike was introduced by Igl et al. (2018) to demonstrate the benefit of belief tracking for POMDP RL. It is a continuous control problem where an agent navigates on a fixed 20 × 20 map. In the original task, partial observability is introduced by disturbing the agent's observation with additive Gaussian noise. To illustrate the effect of observation complexity in natural environments, we concatenate the original observation vector with a random noise vector. The complexity of the optimal policy remains unchanged, but the relevant information is now coupled with irrelevant observation features. More specifically, the state space and action space in Mountain Hike are defined as S = A = R^2, where s_t = [x_t, y_t] and a_t = [δx_t, δy_t]. Transitions of the agent are stochastic with additive Gaussian noise: s_{t+1} = s_t + a_t + ε_a, where ε_a ∼ N(0, 0.25). The observation space is O = R^{2+l}, where l is a predefined constant and l = 0 corresponds to the original setting. Observations are o_t = [o^s_t, o^n_t], where o^s_t = s_t + ε_s, ε_s ∼ N(0, 1), and o^n_t ∈ R^l is sampled from a uniform distribution U(−10, 10). The reward for each step is given by r_t = r(x_t, y_t) − 0.01‖a_t‖, where r(x_t, y_t) is shown in Fig. 3. Episodes end after 75 steps.
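The following NumPy sketch simulates the modified Mountain Hike dynamics described above. The reward surface r(x, y) of Fig. 3, the initial-state distribution, and the class interface are placeholders; only the transition noise, observation construction, action cost, and episode length follow the text.

```python
import numpy as np

class MountainHike:
    """Minimal stand-in for the modified Mountain Hike task described above."""

    def __init__(self, noise_len=100, horizon=75, seed=0):
        self.l = noise_len          # length of the irrelevant noise vector
        self.horizon = horizon      # episodes end after 75 steps
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Initial-state distribution is an assumption (somewhere on the 20 x 20 map).
        self.s = self.rng.uniform(-10.0, 10.0, size=2)   # s_t = [x_t, y_t]
        self.t = 0
        return self._observe()

    def _reward_surface(self, x, y):
        # Placeholder for the r(x, y) map shown in Fig. 3.
        return -np.sqrt(x ** 2 + y ** 2)

    def step(self, a):
        # Stochastic transition: s_{t+1} = s_t + a_t + eps_a, eps_a ~ N(0, 0.25).
        self.s = self.s + a + self.rng.normal(0.0, 0.25, size=2)
        r = self._reward_surface(*self.s) - 0.01 * np.linalg.norm(a)
        self.t += 1
        return self._observe(), r, self.t >= self.horizon

    def _observe(self):
        o_s = self.s + self.rng.normal(0.0, 1.0, size=2)           # noisy state part
        o_n = self.rng.uniform(-10.0, 10.0, size=self.l)           # irrelevant noise vector
        return np.concatenate([o_s, o_n])
```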
We train models for different settings of the noise vector length l, from l = 0 to l = 100. Results are shown in Fig. 4. We observe that DPFRL learns faster than DVRL and GRU in all cases, including the original setting l = 0. Importantly, as the noise vector length increases, the performance of DVRL and GRU degrades, while DPFRL is unaffected. This demonstrates the ability of DPFRL to track a latent belief without having to explicitly model complex observations.
4.3 ATARI GAMES WITH PARTIAL OBSERVABILITY
Atari games are one of the most popular benchmark domains for RL methods (Mnih et al., 2013). Their partially observable variants, Flickering Atari Games, have been used to benchmark POMDP RL methods (Hausknecht & Stone, 2015; Zhu et al., 2018; Igl et al., 2018). Here image observations are single frames randomly replaced by a blank frame with a probability of 0.5. The flickering observations introduce a simple form of partial observability. Another variant, Natural Atari Games (Zhang et al., 2018), replaces the simple black background of the frames of an Atari game with a randomly sampled video stream. This modification brings the Atari domain one step closer to the visually rich real world, in that the relevant information is now encoded in complex observations. As shown by Zhang et al. (2018), this poses a significant challenge for RL. We propose a new RL domain, Natural Flickering Atari Games, that involves both challenges: partial observability simulated by flickering frames, and complex observations simulated by random background videos. The background videos increase observation complexity without affecting the decision-making complexity, making this a suitable domain for evaluating RL methods with complex observations. We sample the background videos from the ILSVRC dataset (Russakovsky et al., 2015). Examples for the BeamRider game are shown in Fig. 5. Details are in Appendix B. We evaluate DPFRL on both Flickering Atari Games and Natural Flickering Atari Games. We use the same set of games as Igl et al. (2018). To ensure a fair comparison, we take the GRU and DVRL results from that paper for the Flickering Atari Games, use the same number of training iterations as Igl et al. (2018), and use the official DVRL open-source code to train for the Natural Flickering Atari Games. Results are summarized in Table 1. We highlight the best performance in bold where the difference is statistically significant (p = 0.05). Detailed training curves are in Appendix E. We observe that DPFRL significantly outperforms GRU in almost all games, which indicates the importance of explicit belief tracking and shows that DPFRL can learn a useful latent belief representation. Despite the simpler observations, DPFRL significantly outperforms DVRL and achieves state-of-the-art results on 5 out of 10 standard Flickering Atari Games (ChopperCommand, MsPacman, BeamRider, Bowling, Asteroids), and it performs comparably in 3 other games (Centipede, Frostbite, IceHockey).
The strength of DPFRL shows even more clearly in the Natural Flickering Atari Games, where it significantly outperforms DVRL on 7 out of 10 games and performs similarly in the rest. In some games, e.g. in Pong, DPFRL performs similarly with and without videos in the background (15.65 vs. 15.40), while the DVRL performance degrades substantially (-19.78 vs. 18.17). These results show that while the architectures of DPFRL and DVRL are similar, the policy-oriented discriminative update of DPFRL is much more effective for handling complex observations, and the MGF features provide a more powerful summary of the particle belief for decision making. However, on some games, e.g. on ChopperCommand, even DPFRL performance drops significantly when adding background videos. This shows that irrelevant features can make a task much harder, even for a discriminative approach, as also observed by Zhang et al. (2018).
4.4 VISUAL NAVIGATION
Figure 6: RGB-D Habitat Observations
Table 2: Visual Navigation Results
                           SPL     Success Rate    Reward
DPFRL                      0.79    0.88            12.82 ± 5.82
DVRL                       0.09    0.11            5.22 ± 2.24
GRU                        0.63    0.74            10.14 ± 2.82
PPO (Savva et al., 2019)   0.70    0.80            —
Visual navigation poses a great challenge for deep RL (Mirowski et al., 2016; Zhu et al., 2017; Lample & Chaplot, 2017). We evaluate DPFRL for visual navigation in the Habitat Environment (Savva et al., 2019), using the real-world Gibson dataset (Xia et al., 2018). In this domain, a robot needs to navigate to goals in previously unseen environments. In each time step, it receives a first-person RGB-D camera image and its distance and relative orientation to the goal. The main challenge lies in the partial and complex observations: first-person view images only provide partial information about the unknown environment, and the relevant information for navigation, traversability, is encoded in rich RGB-D observations along with many irrelevant features, e.g., the texture of the wall. We use the Gibson dataset with the training and validation split provided by the Habitat challenge. We train models with the same architecture as for the Atari games, except for the observation encoder, which accounts for the different observation format. We evaluate models in unseen environments from the validation split and compute the same metrics as in the literature: SPL, success rate, and average reward. Results are shown in Table 2. Further details and results are in Appendices B and E. DPFRL significantly outperforms both DVRL and GRU in this challenging domain. DVRL performs especially poorly, demonstrating the difficulty of learning a generative observation model in realistic, visually rich domains. DPFRL also outperforms the PPO baseline from Savva et al. (2019). We note that submissions to the recently organized Habitat Challenge 2019 (Savva et al., 2019), such as Chaplot et al. (2019), have demonstrated better performance than the PPO baseline (although our results are not directly comparable because of the closed test set of the competition). However, these approaches rely on highly specialized structures, such as 2D mapping and 2D path planning, while we use the same generic network as for the Atari games. Future work may further improve our results by adding task-specific structure to DPFRL or training with PPO instead of A2C.
4.5 ABLATION STUDY
We conduct an extensive ablation study on the Natural Flickering Atari Games to understand the influence of each DPFRL component. The results are presented in Table 3.
The discriminative compatibility function is more effective than a generative observation function. DPFRL-generative replaces the discriminative compatibility function of DPFRL with a generative observation function, where grayscale image observations are modeled by pixel-wise Gaussian distributions with learned mean and variance. Unlike DVRL, DPFRL-generative only differs from DPFRL in the parameterization of the observation function, the rest of the architecture and training loss remains the same. In most cases, the performance for DPFRL-generative degrades significantly compared to DPFRL. These results are aligned with our earlier observations, and indicate that the compatibility function is capable of extracting the relevant information from complex observations without having to learn a more complex generative model. More particles perform better. DPFRL with 1 particle performs poorly on most of the tasks (DPFRLLP1). This indicates that a single latent state is insufficient to represent a complex latent distribution that is required for the task, and that more particles may improve performance. MGF features are useful. We compare DPFRL using MGF features with DPFRL-mean, that only uses the mean particle; and with DPFRL-GRUmerge, that uses a separate RNN to summarize the belief, similar to DVRL. Results show that DPFRL-mean does not work as well as the standard DPFRL, especially for tasks that may need complex belief tracking, e.g., Pong. This can be attributed to the more rich belief statistics provided by MGF features, and that they do not constrain the learned belief representation to be always meaningful when averaged. Comparing to DPFRL-GRUmerge shows that MGF features generally perform better. While an RNN may learn to extract useful features from the latent belief, optimizing the RNN parameters is harder, because they are not permutation invariant to the set of particles and they result in a long backpropagation chain. 5 CONCLUSION We have introduced DPFRL, a framework for POMDP RL in natural environments. DPFRL combines the strength of Bayesian filtering and end-to-end RL: it performs explicit belief tracking with learnable particle filters optimized directly for the RL policy. DPFRL achieved state-of-the-art results on POMDP RL benchmarks from prior work, Mountain Hike and a number of Flickering Atari Games. Further, it significantly outperformed alternative methods in a new, more challenging domain, Natural Flickering Atari Games, as well as for visual navigation using real-world data. We have proposed a novel MGF feature for extracting statistics from an empirical distribution. MGF feature extraction could be applied beyond RL, e.g., for general sequence prediction. DPFRL does not perform well in some particular cases, e.g., DoubleDunk. While our task-oriented discriminative update are less susceptible to complex and noisy observations than a generative model, they do not benefit from an additional learning signal that could improve sample efficiency, e.g., through a reconstruction loss. Future work may combine a generative observation model with the discriminative update in the DPFRL framework. 6 ACKNOWLEDGEMENT This research is partially supported by ONR Global and AFRL grant N62909-18-1-2023. We want to thank Maximilian Igl for suggesting to add videos to the background of Atari games. A BACKGROUND A.1 PARTICLE FILTER ALGORITHM Particle filter is an approximate Bayes filter algorithm for belief tracking. 
Bayes filters estimate the belief bt, i.e., a posterior distribution of the state st, given the history of actions a1:t and observations o1:t. Instead of explicitly modeling the posterior distribution, particle filter approximates the posterior with a set of weighted particles, bt ≈ {(sit, wit)}Ki=1, and update the particles in a Bayesian manner. Importantly, the particle set could approximate arbitrary distributions, e.g., Gaussians, continuous multi-modal distributions, etc. The mean state can be estimated as the mean particle s̄t = ∑K i=1 w i ts i t. The particle updates include three steps: transition update, measurement update, and resampling. Transition update. We first update the particles by a given motion model. More specifically, we sample the next state sit+1 from a generative transition function sit+1 ∼ p(s | sit, at) (8) where p(s | sit, at) is the transition function. Measurement update. The particle weights are updated again using the observation likelihoods wit+1 = ηp(ot | sit+1)wit, η = 1/ K∑ i=1 wit+1 (9) where η is a normalization factor and p(ot | sit+1) is the observation likelihood computed by evaluating observation ot in a generative observation function p(o | sit+1). Resampling. The particle filter algorithm can suffer from particle degeneracy, where after some update steps only a few particles have non-zero weights. This would prevent particle filter to approximate the posterior distribution effectively. Particle degeneracy is typically addressed by performing resampling, where new particles are sampled with repetition proportional to its weight. Specifically, we sample particles from a categorical distribution p parameterized by the particle weights {wit}Ki=1 p(i) = wit (10) where p(i) is the probability for the i-th category, i.e., the i-th particle. The new particles approximate the same distribution, but they assign more representation capacity to the relevant regions of the state space. A.2 MOMENT-GENERATING FUNCTIONS In probability theory, the moment-generating function (MGF) is an alternative specification of the probability distribution of a real-valued random variable (Bulmer, 1979). As its name suggests, MGF of a random variable could be used to generate any order of its moments, which characterize its probability distribution. Mathematically, the MGF of a random variable X with dimension m is defined by MX(v) = E [ ev >X ] (11) where v ∈ Rm and we could consider the MGF of random variable X is the expectation of the random variable ev >X. Consider the series expansion of ev >X ev >X = 1 + v>X + (v>X)2 2! + . . .+ (v>X)n n! + . . . (12) This leads to the well-known fact that the j-th order moment Mj (j-way tensor) is the j-th order derivative of the MGF at v = 0. Mj = djMX dvj |v=0 (13) In DPFRL, we use MGFs as additional features to provide moment information of the particle distribution. DPFRL learns to extract useful moment features for decision making by directly optimizing for policy p(a | bt). B EXPERIMENT DETAILS B.1 IMPLEMENTATION DETAILS Observation Encoders: For the observation encoders, we used the same structure with DVRL (Igl et al., 2018) for a fair comparison. For Mountain Hike, we use two fully connected layers with batch normalization and ReLU activation as the encoder. The dimension for both layers is 64. For the rest of the domains, we first down-sample the image size to 84×84, then we process images with 3 2D-convolution layers with channel number (32, 64, 32), kernel sizes (8, 4, 3) and stride (4, 2, 1), without padding. 
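For reference, a sketch of that image encoder with the stated configuration is given below; the single grayscale input channel and the final flatten are assumptions for illustration (note that the resulting feature size, 32 · 7 · 7 = 1568, matches the decoder dimension mentioned next).

```python
import torch
import torch.nn as nn

# Sketch of the image observation encoder with the stated configuration:
# 84x84 input, three conv layers with channels (32, 64, 32),
# kernels (8, 4, 3), strides (4, 2, 1), no padding, ReLU activations.
encoder = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=32, kernel_size=8, stride=4),  # 84 -> 20
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2),                          # 20 -> 9
    nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=3, stride=1),                          # 9 -> 7
    nn.ReLU(),
    nn.Flatten(),                                                        # 32 * 7 * 7 = 1568
)

x = torch.zeros(1, 1, 84, 84)       # a single (assumed grayscale) frame
print(encoder(x).shape)             # torch.Size([1, 1568])
```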
The compass and goal information are a vector of length 2; they are appended after the image encoding as the input. Observation Decoders: Both DVRL and PFGRU-generative need observation decoders. For the Mountain Hike, we use the same structure as the encoder with a reversed order. The transposed 2D-convolutional network of the decoder has a reversed structure. The decoder is processed by an additional fully connected layer which outputs the required dimension (1568 for Atari and Habitat Navigation, both of which have 84 × 84 observations). Observation-conditioned transition network: We directly use the transition function in PFGRU (Ma et al., 2019) for ftrans(hit−1, at, ot), which is a stochastic function with GRU gated structure. Action at is first encoded by a fully connected layer with batch normalization and ReLU activation. The encoding dimension for Mountain Hike is 64 and 128 for all the rest tasks. The mean and variance of the normal distribution are learned again by two additional fully connected layers; for the variance, we use Softplus as the activation function. State-observation compatibility network: fobs is implemented by a single fully connected layer without activation. In DVRL, the observation function is parameterized over the full observation space o and p(o | hit−1, ait) is assumed as a multivariate independent Bernoulli distribution whose parameters are again determined by a neural network (Igl et al., 2018). For numerical stability, all the probabilities are stored and computed in the log space and the particle weights are always normalized after each weight update. Soft-resampling: The soft-resampling hyperparameter α is set to be 0.9 for Mountain Hike and 0.5 for the rest of the domains. Note that the soft-resampling is used only for DPFRL, not including DVRL. DVRL averages the particle weights to 1/K after each resampling step, which makes the resampling step cannot be trained by the RL. Belief Summary: The GRU used in DVRL and DPFRL-GRUmerge is a single layer GRU with input dimension equals the dimension of the latent vector plus 1, which is the corresponding particle weight. The dimension of this GRU is exactly the dimension of the latent vector. For the MGF features, we use fully connected layers with feature dimensions as the number of MGF features. The activation function used is the exponential function. We could potentially explore the other activation functions to test the generalized-MGF features, e.g., ReLU. Actor Network and Policy Network: The actor network and policy network are two fully connected layers, which take in the belief summary bt = [ h̄t,M 1:m t ] as input. The output dimension of these two networks are chosen according to the RL tasks. Model Learning: For RL, we use an A2C algorithm with 16 parallel environments for both Mountain Hike and Atari games; for Habitat Navigation, we only use 6 parallel environments due to the GPU memory constraints. The loss function for DPFRL and GRU-based policy is just the standard A2C loss, LA2Ct = LAt + λV LVt + λHLHt , where LAt is the policy loss, LVt is the value loss, LHt is the entropy loss for encouraging exploration, and λV and λH are two hyperparameters. For all experiments, we use λV = 0.5 and λH = 0.01. For DVRL, an additional encoding loss LEt is used to train the sequential VAE, which gives a loss function LDVRLt = L A2C t + λ ELEt . We follow the default setting provided by Igl et al. (2018) and set λE = 0.1. 
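A minimal sketch of the A2C loss described above, with λ_V = 0.5 and λ_H = 0.01 as stated; the advantage and return computation is left to standard A2C practice, and the function signature is illustrative.

```python
import torch

def a2c_loss(log_probs, values, returns, entropies, lambda_v=0.5, lambda_h=0.01):
    """Standard A2C loss: policy term + lambda_v * value term + lambda_h * entropy term.

    log_probs: log pi(a_t | b_t) of the taken actions
    values:    V(b_t) predictions
    returns:   bootstrapped return targets for the value function
    entropies: per-step policy entropies
    """
    advantages = returns - values
    policy_loss = -(advantages.detach() * log_probs).mean()   # L^A
    value_loss = advantages.pow(2).mean()                      # L^V
    entropy_loss = -entropies.mean()                           # L^H, encourages exploration
    return policy_loss + lambda_v * value_loss + lambda_h * entropy_loss
```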
The rest of the hyperparameters, including the learning rate, the gradient clipping value, and the soft-resampling α, are tuned on the BeamRider game and directly applied to all domains due to the highly expensive experiment setups. The learning rate for all networks is searched among the following values: (3 × 10^-5, 5 × 10^-5, 1 × 10^-4, 2 × 10^-4, 3 × 10^-4); the gradient clipping value is searched among {0.5, 1.0}; the soft-resampling α is searched among {0.5, 0.9}. The best performing learning rates were 1 × 10^-4 for DPFRL and GRU, and 2 × 10^-4 for DVRL; the gradient clipping value for all models was 0.5; the soft-resampling α is set to 0.9 for Mountain Hike and 0.5 for Atari games.
B.2 EXPERIMENTAL SETUP
Natural Flickering Atari games. We follow the setting of prior works (Zhu et al., 2018; Igl et al., 2018): 1) 50% of the frames are randomly dropped; 2) a frameskip of 4 is used; 3) there is a 0.25 chance of repeating an action twice. In our experiments, we sample background videos from the ILSVRC dataset (Russakovsky et al., 2015). Only videos longer than 500 frames are sampled, to make sure the video length is long enough to introduce variability. For each new episode, we first sample a new video from the dataset, and a random starting pointer is sampled in this video. Once the video finishes, the pointer is reset to the first frame (not the starting pointer we sampled) and continues from there.
Experiment platform: We implement all models using PyTorch (Paszke et al., 2017) with CUDA 9.2 and CuDNN 7.1.2. Flickering Atari environments are modified based on OpenAI Gym (Brockman et al., 2016) and we directly use the Habitat APIs for visual navigation. Collecting experience and performing gradient updates are done on a single computation node with access to one GPU. For Mountain Hike and Atari games we use NVidia GTX1080Ti GPUs. For Habitat visual navigation we use NVidia RTX2080Ti GPUs.
C PF-GRU NETWORK ARCHITECTURE
We implement DPFRL with gated transition and observation functions for particle filtering similar to PF-GRU (Ma et al., 2019). In a standard GRU, the memory update is implemented by a gated function:
h_t = (1 − z_t) ◦ tanh(n_t) + z_t ◦ h_{t−1},   n_t = W_n[r_t ◦ h_{t−1}, x_t] + b_n,   (14)
where W_n and b_n are the corresponding weights and biases, and z_t and r_t are the learned gates. PF-GRU introduces a stochastic cell update by assuming that the update to the memory, n^i_t, follows a parameterized Gaussian distribution:
n^i_t = W_n[r^i_t ◦ h^i_{t−1}, x_t] + b_n + ε^i_t,   ε^i_t ∼ N(0, Σ^i_t),   Σ^i_t = W_Σ[h^i_{t−1}, x_t] + b_Σ.   (15)
With x_t = [f^o_enc(o_t), f^a_enc(a_t)], we implement the transition function h^i_{t+1} ∼ f_trans(h^i_t, o_t, a_t), where f^o_enc is the encoding network for observations and f^a_enc is the encoding network for actions. For the observation function, we directly use a fully connected layer f_obs(h^i_t, o_t) = W_o[h^i_t, o_t] + b_o, where W_o and b_o are the corresponding weights and biases.
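A compact sketch of such a stochastic gated cell is shown below; it follows Eqs. 14-15 in spirit, but the layer layout, the Softplus on the variance (mentioned in Appendix B.1), and the small variance floor are illustrative choices rather than the exact PF-GRU code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticGRUCell(nn.Module):
    """Sketch of a PF-GRU-style stochastic memory update (Eqs. 14-15).

    A GRU-style cell produces gates z_t, r_t and a candidate n_t; here the
    candidate additionally receives Gaussian noise with a learned,
    input-dependent standard deviation, making the transition stochastic.
    """
    def __init__(self, dim_x, dim_h):
        super().__init__()
        self.gates = nn.Linear(dim_x + dim_h, 2 * dim_h)   # produces z_t, r_t
        self.cand = nn.Linear(dim_x + dim_h, dim_h)        # produces n_t
        self.sigma = nn.Linear(dim_x + dim_h, dim_h)       # produces the noise scale

    def forward(self, x, h):
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=-1)))
        z, r = zr.chunk(2, dim=-1)
        n = self.cand(torch.cat([x, r * h], dim=-1))
        # Softplus keeps the learned scale positive; the small floor helps
        # avoid particle collapse, as suggested in Sect. 3.1.
        std = F.softplus(self.sigma(torch.cat([x, h], dim=-1))) + 1e-3
        n = n + std * torch.randn_like(std)
        return (1.0 - z) * torch.tanh(n) + z * h
```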
D DPFRL ALGORITHM
Algorithm 1: DPFRL
Input: Previous belief b_{t−1} ≈ {(h^i_{t−1}, w^i_{t−1})}_{i=1}^K, observation o_t, action a_t
  x^o_t ← Encoder(o_t)   (encode the raw observation)
  h^i_t ∼ f_trans(h^i_{t−1}, a_t, x^o_t)   (transition update)
  w^i_t ← η f_obs(x^o_t, h^i_t) w^i_{t−1},   η = 1 / Σ_{i=1}^K f_obs(x^o_t, h^i_t) w^i_{t−1}   (observation update)
  {(h′^i_t, w′^i_t)}_{i=1}^K ← Soft-Resampling({(h^i_t, w^i_t)}_{i=1}^K)   (soft-resampling)
  h̄_t ← Σ_{i=1}^K w′^i_t h′^i_t   (compute the mean particle)
  for j = 1 : m do
    M^j_t ← Σ_{i=1}^K w′^i_t exp((v^j)^T h′^i_t)   (compute MGF features)
  end
  p(a | b_t) ← π(h̄_t, M^{1:m}_t)   (compute the policy)
  V(b_t) ← V(h̄_t, M^{1:m}_t)   (compute the value)
Output: Updated belief b_t ≈ {(h′^i_t, w′^i_t)}_{i=1}^K, policy p(a | b_t) and value V(b_t)
E ADDITIONAL RESULTS
E.1 FLICKERING ATARI GAMES PLOTS
We provide the accumulated reward curves for the Atari experiments in this section.
Standard Flickering Atari Games. For the standard Flickering Atari Games we provide the training curves below. Results for DVRL and GRU are directly taken from Igl et al. (2018).
(Figure: training return vs. frames, up to 5e7, for DPFRL, DVRL and GRU on the Flickering Atari Games: (a) Pong, (b) ChopperCommand, (c) MsPacman, (d) Centipede, (e) BeamRider, (f) Frostbite, (g) Bowling, (h) IceHockey, (i) DoubleDunk, (j) Asteroids.)
Natural Flickering Atari Games. For the Natural Flickering Atari Games we report results for a separate validation set, where the background videos are different from the training set. The validation environment steps once after every 100 training iterations.
(Figure: validation return vs. frames, up to 5e7, for DPFRL, DVRL and GRU on the Natural Flickering Atari Games: (a) Pong, (b) ChopperCommand, (c) MsPacman, (d) Centipede, (e) BeamRider, (f) Frostbite, (g) Bowling, (h) IceHockey, (i) DoubleDunk, (j) Asteroids.)
E.2 VISUAL NAVIGATION
We present the reward curve for the Habitat visual navigation task below. DPFRL outperforms both the GRU-based policy and DVRL given the same training time.
DVRL struggles with training the observation model and fails during the first half of the training time. The GRU-based policy learns fast, but given only its model-free belief tracker it struggles to achieve higher reward after a certain point. We only provide the reward curve here, as SPL and success rate are only evaluated after training is finished.
E.3 PARTICLE VISUALIZATION WITH PCA
We further visualize the latent particles by principal component analysis (PCA), keeping the first 2 components. We choose a trajectory in the Habitat visual navigation experiment, where 15 particles are used. We observe that particles initially spread across the space (t = 0). As the robot only receives partial information in the visual navigation task, particles gradually form a distribution with two clusters (t = 56), which represent two major hypotheses of its current state. After more information is incorporated into the belief, they begin to converge and finally become a single cluster (t = 81). We did not observe particle depletion or posterior collapse in our experiment. This could be further guarded against by adding an entropy loss on the learned variance of f_trans, which we leave for future study.
(Figure: first two principal components of the latent particles along the trajectory, at (a) t = 0, (b) t = 22, (c) t = 30, (d) t = 56, (e) t = 74, (f) t = 81.)
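The projection used for these plots can be reproduced with a few lines of NumPy; this sketch assumes the particles of one time step are stacked into a matrix, and the function name is ours.

```python
import numpy as np

def particle_pca_2d(h):
    """Project a set of latent particles onto their first two principal components.

    h: [K, dim_h] array of particles from a single time step.
    Returns a [K, 2] array suitable for scatter plots like the ones above.
    """
    centered = h - h.mean(axis=0, keepdims=True)
    # SVD of the centered particle matrix; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Example: 15 particles of dimension 256, as in the Habitat experiments.
coords = particle_pca_2d(np.random.randn(15, 256))
print(coords.shape)   # (15, 2)
```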
1. What is the specific question or problem addressed by the paper? 2. Is the approach well motivated, including being well-placed in literature? 3. Does the paper support its claims? Are results, whether theoretical or empirical, correct and scientifically rigorous? 4. Summarize what the paper claims to do or contribute. Be positive and generous. 5. Clearly state your decision (accept or reject) with one or two key reasons for this choice. 6. Provide supporting arguments for the reasons for the decision. 7. Provide additional feedback to improve the paper. Make it clear that these points are here to help and not necessarily part of your decision assessment.
Review
Review What is the specific question/problem tackled by the paper? Representation learning in POMDPs in order to ignore spurious information in observations. Is the approach well motivated, including being well-placed in the literature? Some comparisons to related work are missing; while the comparisons would enrich the paper, their absence is not fundamentally limiting to the conclusions. There's an additional PSR-related work that can be seen as learning representations for POMDPs (Guo et al., Neural predictive belief representations, arXiv:1811.06407). This work is in line with the work of Gregor et al., 2019, and both provide suitable representation learning techniques for POMDPs. These representation learning in the paper is based on action-conditional predictions of future quantities, which is complementary to the approach proposed in the paper. That is, one could conceive adding action-conditional predictions of the future with the particles as the RNN states. Does the paper support the claims? This includes determining if results, whether theoretical or empirical, are correct and if they are scientifically rigorous. I think the support is somewhat adequate. The claim that the proposed method handles spurious information is well supported by the experiment in mountain hike, but not quite so by the Atari experiments. The performance (upon introduction of the "natural" on top of flickering) takes a big hit for both DPFRL and DVRL. Still, the performance improvement of DPFRL over DVRL is still an encouraging result. Summarize what the paper claims to do/contribute. Be positive and generous. The paper proposes a neural implementation of particle filters, by treating samples of RNN states as particles. The particles are used to estimate moment-generating functions evaluated at trained vectors, which in turn are supposed to provide more information for the policy's decision making. The paper uses a discriminator to shape the representation. The ablation study suggests that all three components (particles, MGFs & discrimination) are necessary. However, the third component has been shown not to be exclusively helpful for representation learning (Gregor et al., Guo et al.) I would suggest a study in comparison to Gregor et al.'s method (DRAW) instead. Clearly state your decision (accept or reject) with one or two key reasons for this choice. I vote for acceptance. Provide supporting arguments for the reasons for the decision. I think the algorithmic idea in this paper is a step in the right direction and can be of interest for the community. I would hope for the benchmarks to be more like the Habitat, and less like Atari with background videos. The conclusions in the latter benchmark seem less likely to apply to tasks in physically structured environments. Provide additional feedback with the aim to improve the paper. Make it clear that these points are here to help, and not necessarily part of your decision assessment. I think it is important for the paper to qualify the kind of POMDPs being considered. The defining features of most of the environments being used is that the state is observed through a noisy channel. Many POMDPs are of interest because the observations are really providing partial information about the state, even if it is noiseless. This is the case for the Habitat setting. Because the paper's claims about the adequacy of the method for POMDPs rests on the choice of environments, I think it's important to quality what kind of POMDPs are being considered here. 
I would also caution against stating that the environment is closer to the real world. It would perhaps be better to say that the natural flickering is more interesting than the natural and the flickering because it benchmarks robustness to irrelevant information in observations, provided almost in tandem with state information; with intermittently missing observations. Please add some explanation about how the negative examples are sampled for the contrastive estimation.
ICLR
Title Implicit bias of gradient descent for mean squared error regression with wide neural networks Abstract We investigate gradient descent training of wide neural networks and the corresponding implicit bias in function space. For 1D regression, we show that the solution of training a width-n shallow ReLU network is within n−1/2 of the function which fits the training data and whose difference from initialization has smallest 2-norm of the weighted second derivative with respect to the input. The curvature penalty function 1/ζ is expressed in terms of the probability distribution that is utilized to initialize the network parameters, and we compute it explicitly for various common initialization procedures. For instance, asymmetric initialization with a uniform distribution yields a constant curvature penalty, and thence the solution function is the natural cubic spline interpolation of the training data. While similar results have been obtained in previous works, our analysis clarifies important details and allows us to obtain significant generalizations. In particular, the result generalizes to multivariate regression and different activation functions. Moreover, we show that the training trajectories are captured by trajectories of spatially adaptive smoothing splines with decreasing regularization strength. 1 INTRODUCTION Understanding why neural networks trained in the overparametrized regime and without explicit regularization generalize well in practice is an important problem (Zhang et al., 2017). Some form of capacity control different from network size must be at play (Neyshabur et al., 2014) and specifically the implicit bias of parameter optimization has been identified to play a key role (Neyshabur et al., 2017). By implicit bias we mean that among the many hypotheses that fit the training data, the algorithm selects one which satisfies additional properties that may be beneficial for its performance on new data. Jacot et al. (2018) and Lee et al. (2019) showed that the training dynamics of shallow and deep wide neural networks is well approximated by that of the linear Taylor approximation of the models at a suitable initialization. Chizat et al. (2019) observe that a model can converge to zero training loss while hardly varying its parameters, a phenomenon that can be attributed to scaling of the output weights and makes the model behave as its linearization around the initialization. Zhang et al. (2019) consider linearized models for regression problems and show that gradient flow finds the global minimum of the loss function which is closest to initialization in parameter space. This type of analysis connects with trajectory based analysis of neural networks (Saxe et al., 2014). Oymak and Soltanolkotabi (2019) studied the overparametrized neural networks directly and showed that gradient descent finds a global minimizer of the loss function which is close to the initialization. Towards interpreting parameters in function space, Savarese et al. (2019) and Ongie et al. (2020) studied infinite-width neural networks with parameters having bounded norm, in 1D and multi-dimensional input spaces, respectively. They showed that, under a standard parametrization, the complexity of the functions represented by the network, as measured by the 1-norm of the second derivative, can be controlled by the 2-norm of the parameters. Using these results, one can show that gradient descent with `2 weight penalty leads to simple functions. Sahs et al. 
(2020) relates function properties, such as breakpoint and slope distributions, to the distributions of the network parameters. The implicit bias of parameter optimization has been investigated in terms of the properties of the loss function at the points reached by different optimization methodologies (Keskar et al., 2017; Wu et al., 2017; Dinh et al., 2017). In terms of the solutions, Maennel et al. (2018) show that gradient flow for shallow networks with rectified linear units (ReLU) initialized close to zero quantizes features in a way that depends on the training data but not on the network size. Williams et al. (2019) obtained results for 1D regression contrasting the kernel and adaptive regimes. Soudry et al. (2018) show that in classification problems with separable data, gradient descent with linear networks converges to a maxmargin solution. Gunasekar et al. (2018b) present a result on implicit bias for deep linear convolutional networks, and Ji and Telgarsky (2019) study non-separable data. Chizat and Bach (2020) show that gradient flow for logistic regression with infinitely wide two-layer networks yields a max-margin classifier in a certain space. Gunasekar et al. (2018a) analyze the implicit bias of different optimization methods (natural gradient, steepest and mirror descent) for linear regression and separable linear classification problems, and obtain characterizations in terms of minimum norm or max-margin solutions. In this work, we study the implicit bias of gradient descent for regression problems. We focus on wide ReLU networks and describe the bias in function space. In Section 2 we provide settings and notation. We present our main results in Section 3, and develop the main theory in Sections 4 and 5. In the interest of a concise presentation, technical proofs and extended discussions are deferred to appendices. 2 NOTATION AND PROBLEM SETUP Consider a fully connected network with d inputs, one hidden layer of width n, and a single output. For any given input x∈Rd, the output of the network is f(x,θ)= n∑ i=1 W (2) i φ(〈W (1) i ,x〉+b (1) i )+b (2), (1) where φ is a point-wise activation function,W (1)∈Rn×d,W (2)∈Rn, b(1)∈Rn and b(2)∈R are the weights and biases of layer l=1,2. We write θ=vec(∪2l=1{W (l),b(l)}) for the vector of all network parameters. These parameters are initialized by independent samples of pre-specified random variables W and B in the following way: W (1) i,j d = √ 1/dW, b(1)i d = √ 1/d B W (2) i d = √ 1/nW, b(2) d= √ 1/n B. (2) More generally, we will also allow weight-bias pairs to be sampled from a joint distribution of (W,B) which we only assume to be sub-Gaussian. In the analysis of Jacot et al. (2018); Lee et al. (2019), W and B are Gaussian N (0,σ2). In the default initialization of PyTorch,W and B have uniform distribution U(−σ,σ). The setting (1) is known as the standard parametrization. Some works (Jacot et al., 2018; Lee et al., 2019) utilize the so-called NTK parametrization, where the factor √ 1/n is carried outside of the trainable parameter. If we fix the learning rate for all parameters, gradient descent leads to different trajectories under these two parametrizations. Our results are presented for the standard parametrization. Details on this in Appendix C.3. We consider a regression problem for data {(xj , yj)}Mj=1 with inputs X = {xj}Mj=1 and outputs Y = {yj}Mj=1. For a loss function ` : R × R → R, the empirical risk of our function is L(θ)= ∑M j=1`(f(xj ,θ),yj). 
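As a concrete reference for the setup above, here is a small NumPy sketch of the width-n network under the standard parametrization (1) with the scaling of (2); taking W and B to be standard Gaussians is just one admissible choice of initialization distribution, and the function names are illustrative.

```python
import numpy as np

def init_shallow_relu(n, d=1, seed=0):
    """Initialize a width-n shallow ReLU network with the scaling of Eq. (2),
    using standard Gaussian W and B for illustration."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((n, d)) * np.sqrt(1.0 / d)
    b1 = rng.standard_normal(n) * np.sqrt(1.0 / d)
    W2 = rng.standard_normal(n) * np.sqrt(1.0 / n)
    b2 = rng.standard_normal() * np.sqrt(1.0 / n)
    return W1, b1, W2, b2

def f(x, params):
    """f(x, theta) = sum_i W2_i * relu(<W1_i, x> + b1_i) + b2 for inputs x of shape [M, d]."""
    W1, b1, W2, b2 = params
    return np.maximum(x @ W1.T + b1, 0.0) @ W2 + b2

params = init_shallow_relu(n=1000)
xs = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
print(f(xs, params))   # network outputs at initialization
```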
We use full batch gradient descent with a fixed learning rate η to minimize L(θ). Writing θt for the parameter at time t, and θ0 for the initialization, this defines an iteration θt+1 =θt−η∇L(θ)=θt−η∇θf(X ,θt)T∇f(X ,θt)L, (3) where f(X ,θt)=[f(x1,θt),...,f(xM ,θt)]T is the vector of network outputs for all training inputs, and ∇f(X ,θt)L is the gradient of the loss with respect to the model outputs. We will use subscript i to index neurons and subscript t to index time. Let Θ̂n be the empirical neural tangent kernel (NTK) of the standard parametrization at time 0, which is the matrix Θ̂n= 1n∇θf(X ,θ0)∇θf(X ,θ0) T . 3 MAIN RESULTS AND DISCUSSION We obtain a description of the implicit bias in function space when applying gradient descent to regression problems with wide ReLU neural networks. We prove the following result in Appendix D. An interpretation of the result and generalizations are given further below. Theorem 1 (Implicit bias of gradient descent in wide ReLU networks). Consider a feedforward network with a single input unit, a hidden layer of n rectified linear units, and a single linear output unit. Assume standard parametrization (1) and that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W,B) (2) with joint density pW,B. Then, for any finite data set {(xj ,yj)}Mj=1 and sufficiently large n there exist constant u and v so that optimization of the mean square error on the adjusted training data {(xj , yj − uxj − v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which the output function f(x,θ∗) (1) attains zero training error. Furthermore, letting ζ(x)= ∫ R|W | 3pW,B(W,−Wx) dW and S=supp(ζ)∩[minixj ,maxixj ], we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈S (the 2-norm over S) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C2(S) ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx subject to g(xj)=yj−uxj−v, j=1,...,M. (4) Interpretation An intuitive interpretation of the theorem is that at those regions of the input space where ζ is smaller, we can expect the difference between the functions after and before training to have a small curvature. We may call ρ=1/ζ a curvature penalty function. The bias induced from initialization is expressed explicitly. We note that under suitable asymmetric parameter initialization (see Appendix C.2), it is possible to achieve f(·,θ0)≡0. Then the regularization is on the curvature of the output function itself. In Theorem 9 we obtain the explicit form of ζ for various common parameter initialization procedures. In particular, when the parameters are initialized independently from a uniform distribution on a finite interval, ζ is constant and the problem is solved by the natural cubic spline interpolation of the data. The adjustment of the training data simply accounts for the fact that second derivatives define a function only up to linear terms. In practice we can use the coefficients a and b of linear regression yj = axj + b+ j , j = 1, ... ,M , and set the adjusted data as {(xj , j)}Mj=1. Although Theorem 1 describes the gradient descent training with the linearly adjusted data, this result can also approximately describe training with the original training data. Further details are provided in Appendix L. We illustrate Theorem 1 numerically in Figure 1 and more extensively in Appendix A. 
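For the uniform-initialization case highlighted above, where ζ is constant and the variational problem reduces to natural cubic spline interpolation of the linearly adjusted data, the predicted limiting function can be computed directly. The sketch below assumes SciPy, a toy data set, and a zero initial output function (e.g. under the ASI trick); it is an illustration of the theorem's prediction, not of the training procedure itself.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy training data for a 1D regression problem.
x = np.array([-2.0, -1.0, 0.0, 0.5, 1.5, 2.5])
y = np.sin(2.0 * x)

# Adjust the data by its least-squares linear fit, as in Theorem 1
# (second derivatives only determine a function up to linear terms).
a, b = np.polyfit(x, y, deg=1)
residual = y - (a * x + b)

# Natural cubic spline interpolation of the adjusted data: the minimizer
# of the unweighted curvature penalty, i.e. the uniform-initialization case.
g = CubicSpline(x, residual, bc_type='natural')

# Prediction to compare against a trained wide network: add the linear part back.
x_test = np.linspace(-2.0, 2.5, 200)
prediction = g(x_test) + a * x_test + b
```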
In close agreement with the theory, the solution to the variational problem captures the solution of gradient descent training uniformly with error of order n−1/2. To illustrate the effect of the curvature penalty function, Figure 1 also shows the solutions to the variational problem for different values of ζ corresponding to different initialization distributions. We see that at input points where ζ is small / peaks strongly, the solution function tends to have a lower curvature / be able to use a higher curvature in order to fit the data. With the presented bias description we can formulate heuristics for parameter initialization either to ease optimization or also to induce specific smoothness priors on the solutions. In particular, by Proposition 8 any curvature penalty 1/ζ can be implemented by an appropriate choice of the parameter initialization distribution. By our analysis, the effective capacity of the model, understood as the set of possible output functions after training, is adapted to the sizeM of the training dataset and is well captured by a space of cubic splines relative to the initial function. This is a space with dimension of orderM independently of the number of parameters of the network. Strategy of the proof In Section 4, we observe that for a linearized model gradient descent with sufficiently small step size finds the minimizer of the training objective which is closest to the initial parameter (similar to a result by Zhang et al., 2019). Then Theorem 4 shows that the training dynamics of the linearization of a wide network is well approximated in parameter and function space by that of a lower dimensional linear model which trains only the output weights. This property is sometimes taken for granted and we show that it holds for the standard parametrization, although it does not hold for the NTK parametrization (defined in Appendix C.3), which leads to the adaptive regime. In Section 5, for networks with a single input and a single layer of ReLUs, we relate the implicit bias of gradient descent in parameter space to an alternative optimization problem. In Theorem 5 we show that the solution of this problem has a well defined limit as the width of the network tends to infinity, which allows us to obtain a variational formulation. In Theorem 6 we translate the description of the bias from parameter space to function space. In Theorem 9 we provide explicit descriptions of the weight function for various common initialization procedures. Finally, we can utilize recent results bounding the difference in function space of the solutions obtained from training a wide network and its linearization (Lee et al., 2019, Theorem H.1). Generalizations Theorem 4 has several generalizations elaborated in Appendix P. For multivariate regression, we have the following theorem. Theorem 2 (Multivariate regression). Use the same network setting as in Theorem 1 except that the number of input units changes to d. Assume that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W ,B) where W is a d-dimensional random vector and B is a random variable. Then, for any finite data set {(xj ,yj)}Mi=1 and sufficiently large n there exist constant vector u and constant v so that optimization of the mean square error on the adjusted training data {(xj ,yj−〈u,xj〉−v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which f(x,θ∗) attains zero training error. 
Furthermore, letU=‖W‖2, V = W/‖W‖2, C =−B/‖W‖2 and ζ(V ,c) = pV,C(V ,c)E(U2|V = V ,C = c) where pV,C is the joint density of (V ,C). Then we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈Rd (the 2-norm over Rd) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C(Rd) ∫ supp(ζ) ( R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c) )2 ζ(V ,c) dV dc subject to g(xj)=yj , j=1,...,M, R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c)=0, (V ,c) 6∈supp(ζ). (5) Here R is the Radon transform which is defined by R{f}(ω, b) := ∫ 〈ω,x〉=b f(x)ds(x), and the power of the negative Laplacian (−∆)(d+1)/2 is the operator defined in Fourier domain by ̂(−∆)(d+1)/2f(ξ)=‖ξ‖d+1f̂(ξ). For different activation functions, we have the following corollary. Corollary 3 (Different activation functions). Use the same setting as in Theorem 1 except that we use the activation function φ instead of ReLU. Suppose that φ is a Green’s function of a linear operator L, i.e. Lφ= δ, where δ denotes the Dirac delta function. Assume that the activation function φ is homogeneous of degree k, i.e. φ(ax)=akφ(x) for all a>0. Then we can find a function p satisfying Lp≡ 0 and adjust training data {(xj ,yj)}Mj=1 to {(xj ,yj−p(xj)}Mj=1. After that, the statement in Theorem 1 holds with the variational problem (4) changed to min g∈C2(S) ∫ S 1 ζ(x) [L(g(x)−f(x,θ0))]2 dx s.t. g(xj)=yj−p(xj), j=1,...,M, (6) where ζ(x)=pC(x)E(W2k|C=x) and S=supp(ζ)∩[minixi,maxixi]. Moreover, our method allows us to describe the optimization trajectory in function space (see Appendix N). If we substitute constraints g(xj)=yj in (4) by a quadratic term 1λ 1 M ∑M j=1(g(xj)−yj)2 added to the objective, we obtain the variational problem for a so-called spatially adaptive smoothing spline (see Abramovich and Steinberg, 1996; Pintore et al., 2006). This problem can be solved explicitly and can be shown to approximate early stopping. To be more specific, the solution to following optimization problem approximates the output function of the network after gradient descent training for t steps with learning rate η̄/n: min g∈C2(S) M∑ j=1 [g(xj)−yj ]2+ 1 η̄t ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx. (7) Related works Zhang et al. (2019) described the implicit bias of gradient descent in the kernel regime as minimizing a kernel norm from initialization, subject to fitting the training data. Our result can be regarded as making the kernel norm explicit, thus providing an interpretable description of the bias in function space and further illuminating the role of the parameter initialization procedure. We prove the equivalence in Appendix M. Savarese et al. (2019) showed that infinite-width networks with 2-norm weight regularization represent functions with smallest 1-norm of the second derivative, an example of which are linear splines. We discuss this in Appendix C.4. A recent preprint further develops this direction for two-layer networks with certain activation functions that interpolate data while minimizing a weight norm (Parhi and Nowak, 2019). In contrast, our result characterizes the solutions of training from a given initialization without explicit regularization, which turn out to minimize a weighted 2-norm of the second derivative and hence correspond to cubic splines. In finishing this work we became aware of a recent preprint (Heiss et al., 2019) which discusses ridge weight penalty, adaptive splines, and early stopping for one-input ReLU networks training only the output layer. Williams et al. 
(2019) showed a similar result in the kernel regime for shallow ReLU networks where they train only the second layer and from zero initialization. In contrast, we consider the initialization of the second layer and show that the difference from the initial output function is implicitly regularized by gradient descent. We show the result of training both layers and prove that it can be approximated by training only the second layer in Theorem 4. In addition, we give the explicit form of ζ in Theorem 9, while the ζ given by Williams et al. (2019) has a minor error because of a typo in their computation. Most importantly, our statement can be generalized to multivariate regression, different activation functions, training trajectories. 4 WIDE NETWORKS AND PARAMETER SPACE 4.1 IMPLICIT BIAS IN PARAMETER SPACE FOR A LINEARIZED MODEL In this section we describe how training a linearized network or a wide network by gradient descent leads to solutions that are biased, having parameter values close to the values at initialization. First, we consider the following linearized model: f lin(x,ω)=f(x,θ0)+∇θf(x,θ0)(ω−θ0). (8) We write ω for the parameter of the linearized model, in order to distinguish it from the parameter of the nonlinearized model. The empirical loss of the linearized model is defined by Llin(ω)= ∑M j=1`(f lin(xj ,ω),yj). The gradient descent iteration for the linearized model is given by ω0 =θ0, ωt+1 =ωt−η∇θf(X ,θ0)T∇f lin(X ,ωt)L lin. (9) Next, we consider wide neural networks. According to Lee et al. (2019, Theorem H.1), sup t ‖f lin(x,ωt)−f(x,θt)‖2 =O(n− 1 2 ) with arbitrarily high probability. So gradient descent training of a wide network or of the linearized model give similar trajectories and solutions in function space. Both fit the training data perfectly, meaning f lin(X ,ω∞)=f(X ,θ∞)=Y , and are also approximately equal outside the training data. Under the assumption that rank(∇θf(X ,θ0)) =M , the gradient descent iterations (9) converge to the unique global minimum that is closest to initialization (Gunasekar et al., 2018a; Zhang et al., 2019), which is the solution of following constrained optimization problem (further details and remarks are provided in Appendix E): min ω ‖ω−θ0‖2 s.t. f lin(X ,ω)=Y. (10) 4.2 TRAINING ONLY THE OUTPUT LAYER APPROXIMATES TRAINING ALL PARAMETERS From now on we consider networks with a single hidden layer of n ReLUs and a linear output f(x,θ) = ∑n i=1W (2) i [W (1) i x+ b (1) i ]+ + b (2). We show that the functions and parameter vectors obtained by training the linearized model are close to those obtained by training only the output layer. Hence, by the arguments of the previous section, training all parameters of a wide network or training only the output layer gives similar functions. Let θ0 = vec(W (1) ,b (1) ,W (2) ,b (2) ) be the parameter at initialization so that f lin(·,θ0) = f(·,θ0). After training the linearized network let the parameter be ω∞ = vec(Ŵ (1),b̂(1),Ŵ (2),b̂(2)). Using initialization (2), with probability arbitrarily close to 1,W (1) i ,b (1) i =O(1) andW (2) i ,b (2) =O(n− 1 2 ).1 Therefore, writingH for the Heaviside function, we have ∇ W (1) i ,b (1) i f(x,θ0)= [ W (2) i H(W (1) i x+b (1) )·x,W (2)i H(W (1) i x+b (1) i ) ] =O(n− 1 2 ), ∇ W (2) i ,b (2)f(x,θ0)= [ [W (1) i x+b (1) i ]+ ,1 ] =O(1). (11) So when n is large, if we use gradient descent with a constant learning rate for all parameters, then the changes ofW (1), b(1), b(2) are negligible compared with the changes ofW (2). 
So approximately we can train just the output weights,W (2)i ,i=1,...,n, and fix all other parameters. This corresponds to a smaller linear model. Let ω̃t = vec(W (1) t ,b (1) t ,W̃ (2) t ,b (2) t ) be the parameter at time t under the update rule whereW (1) ,b (1) , b (2) are kept fixed at their initial values, and W̃ (2) 0 =W (2) , W̃ (2) t+1 =W̃ (2) t −η∇W (2)Llin(ω̃t). (12) Let ω̃∞ = limt→∞ ω̃t. By the above discussion, we expect that f lin(x,ω̃∞) is close to f lin(x,ω∞). In fact, we prove the following for the MSE loss. The proof and further remarks are provided in Appendix F. We relate Theorem 4 to training a wide network in Appendix G. Theorem 4 (Training only output weights vs linearized network). Consider a finite data set {(xi,yi)}Mi=1. Assume that (1) we use the MSE loss `(ŷ,y) = 12‖ŷ−y‖ 2 2; (2) infnλmin(Θ̂n)> 0. Let ωt denote the parameters of the linearized model at time t when we train all parameters using (9), and let ω̃t denote the parameters at time t when we only train weights of the output layer using (12). If we use the same learning rate η in these two training processes and η < 2 nλmax(Θ̂n) , then for any x∈R, with probability arbitrarily close to 1 over the random initialization (2), sup t |f lin(x,ω̃t)−f lin(x,ωt)|=O(n−1), as n→∞. (13) Moreover, in terms of the parameter trajectories we have supt ‖W (1) t − Ŵ (1) t ‖2 = O(n−1), supt‖b (1) t −b̂ (1) t ‖2 =O(n−1), supt‖W̃ (2) t −Ŵ (2) t ‖2 =O(n−3/2), supt‖b (2) t −b̂ (2) t ‖=O(n−1). In view of the arguments in this section, in the next sections we will focus on training only the output weights and understanding the corresponding solution functions. 1More precisely, for any δ>0, ∃C, s.t. with prob. 1−δ, |W (2)i |,|b (2)|≤Cn−1/2 and |W (1)i |,|b (1) i |≤C. 5 GRADIENT DESCENT LEADS TO SIMPLE FUNCTIONS In this section we provide a function space characterization of the implicit bias previously described in parameter space. According to (10), gradient descent training of the output weights (12) achieves zero loss, f lin(xj ,ω̃∞)−f lin(xj ,θ0)= ∑n i=1(W̃ (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M , with minimum ‖W̃ (2)−W (2)‖22. Hence gradient descent is actually solving min W (2) ‖W (2)−W (2)‖22 s.t. n∑ i=1 (W (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M. (14) To simplify the presentation, in the following we let f lin(x,θ0) ≡ 0 by using the ASI trick (see Appendix C.2). The analysis still goes through without this. 5.1 INFINITE WIDTH LIMIT We reformulate problem (14) in a way that allows us to consider the limit of infinitely wide networks, with n → ∞, and obtain a deterministic counterpart, analogous to the convergence of the NTK. Let µn denote the empirical distribution of the samples (W (1) i , bi) n i=1, so that µn(A) = 1 n ∑n i=11A ( (W (1) i ,bi) ) . Here 1A is the indicator function for measurable subsets A in R2. We further consider a function αn : R2→R whose value encodes the difference of the output weight from its initialization for a hidden unit with input weight and bias given by the argument, αn(W (1) i ,bi)=n(W (2) i −W (2) i ). Then (14) with ASI can be rewritten as min αn∈C(R2) ∫ R2 α2n(W (1),b) dµn(W (1),b) s.t. ∫ R2 αn(W (1),b)[W (1)xj+b]+ dµn(W (1),b)=yj , (15) where j ranges from 1 toM . Here we minimize over functions αn inC(R2), but since only the values on (W (1)i ,bi) n i=1 are taken into account, we can take any continuous interpolation of αn(W (1) i ,bi), i=1,...,n. Now we can consider the infinite width limit. Let µ be the probability measure of (W,B). 
We obtain a continuous version of problem (15) by substituting µ for µn. Since we know that µn weakly converges to µ, we prove that in fact the solution of problem (15) converges to the solution of the continuous problem, which is formulated in the following theorem. Details in Appendix H. Theorem 5. Let (W (1)i ,bi)ni=1 be i.i.d. samples from a pair (W,B) of random variables with finite fourth moment. Suppose µn is the empirical distribution of (W (1) i ,bi) n i=1 and αn(W (1),b) is the solution of (15). Let α(W (1),b) be the solution of the continuous problem with µ in place of µn. Then for any bounded [−L,L], supx∈[−L,L]|gn(x,αn)−g(x,α)|=O(n−1/2) with high probability, where gn(x,αn)= ∫ R2αn(W (1),b)[W (1)x+b]+ dµn(W (1),b) is the function represented by a network with n hidden neurons after training, and g(x,α)= ∫ R2α(W (1),b)[W (1)x+b]+ dµ(W (1),b) is the function represented by the infinite-width network. 5.2 FUNCTION SPACE DESCRIPTION OF THE IMPLICIT BIAS Next we connect the problem from the previous section to second derivatives by first rewriting it in terms of breakpoints. Consider the breakpoint c=−b/W (1) of a ReLU with weightW (1) and bias b. We define a corresponding random variable C=−B/W and let ν denote the distribution of (W,C).2 Then with γ(W (1),c)=α(W (1),−cW (1)) the continuous version of (15) is equivalently given as min γ∈C(R2) ∫ R2 γ2(W (1),c) dν(W (1),c) s.t. ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , (16) where j ranges from 1 toM . Let νC denote the distribution of C=−B/W , and νW|C=c the conditional distribution of W given C = c. Suppose νC has support supp(νC) and a density function pC(c). 2Here we assume that P(W=0)=0 so that the random variable C is well defined. It is not an important restriction, since neurons with weightW (1)=0 give constant functions that can be absorbed in the bias of output layer. Let g(x,γ) = ∫ R2 γ(W (1), c)[W (1)(x− c)]+ dν(W (1), c), which again corresponds to the output function of the network. Then, the second derivative g′′ with respect to x (see Appendix I) satisfies g′′(x,γ)=pC(x) ∫ Rγ(W (1),x) ∣∣W (1)∣∣ dνW|C=x(W (1)). Thus γ(W (1),c) is closely related to g′′(x,γ) and we can try to express (16) in terms of g′′(x,γ). Since g′′(x,γ) determines g(x,γ) only up to linear functions, we consider the following problem: min γ∈C(R2),u∈R,v∈R ∫ R2 γ2(W (1),c) dν(W (1),c) subject to uxj+v+ ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , j=1,...,M. (17) Here u,v are not included in the cost. They add a linear function to the output of the neural network. If u and v in the solution of (17) are small, then the solution is close to the solution of (16). Ongie et al. (2020) also use this trick to simplify the characterization of neural networks in function space. Next we study the solution of (17) in function space. This is our main technical result. Theorem 6 (Implicit bias in function space). Assume W and B are random variables with P(W = 0) = 0, and let C =−B/W . Let ν denote the probability distribution of (W,C). Suppose (γ,u,v) is the solution of (17), and consider the corresponding output function g(x,(γ,u,v))=ux+v+ ∫ R2 γ(W (1),c)[W (1)(x−c)]+ dν(W (1),c). (18) Let νC denote the marginal distribution of C and assume it has a density function pC . Let E(W2|C) denote the conditional expectation ofW2 given C. Consider the function ζ(x)=pC(x)E(W2|C=x). Assume that training data xi∈supp(ζ), i=1,...,m. Consider the set S=supp(ζ)∩[minixi,maxixi]. 
Then g(x,(γ,u,v)) satisfies g″(x,(γ,u,v)) = 0 for x ∉ S, and for x ∈ S it is the solution of the following problem: min_{h∈C²(S)} ∫_S (h″(x))² / ζ(x) dx s.t. h(x_j) = y_j, j = 1,...,M. (19) The proof is provided in Appendix I, where we also present the corresponding statement without ASI. We study the explicit form of this function in the next section. 5.3 EXPLICIT FORM OF THE CURVATURE PENALTY FUNCTION Proposition 7. Let p_{W,B} denote the joint density function of (W,B) and let C = −B/W, so that p_C is the breakpoint density. Then ζ(x) = E(W²|C=x) p_C(x) = ∫_R |W|³ p_{W,B}(W,−Wx) dW. The proof is presented in Appendix J. If we allow the initial weights and biases to be sampled from a suitable joint distribution, we can make the curvature penalty ρ = 1/ζ arbitrary. Proposition 8 (Constructing any curvature penalty). Given any function ϱ : R → R_{>0} satisfying Z = ∫_R 1/ϱ < ∞, if we set the density of C as p_C(x) = (1/Z)(1/ϱ(x)) and make W independent of C with non-vanishing second moment, then (E(W²|C=x) p_C(x))⁻¹ = (E(W²) p_C(x))⁻¹ ∝ ϱ(x), x ∈ R. Further remarks on sampling and independent variables are provided in Appendix J. To conclude this section, we compute the explicit form of ζ for several common initialization procedures. Theorem 9 (Explicit form of the curvature penalty for common initializations). (a) Gaussian initialization. Assume that W and B are independent, W ∼ N(0,σ_w²) and B ∼ N(0,σ_b²). Then ζ is given by ζ(x) = 2σ_w³σ_b³ / (π(σ_b² + x²σ_w²)²). (b) Binary-uniform initialization. Assume that W and B are independent, W ∈ {−1,1} and B ∼ U(−a_b,a_b) with a_b ≥ L. Then ζ is constant on [−L,L]. (c) Uniform initialization. Assume that W and B are independent, W ∼ U(−a_w,a_w) and B ∼ U(−a_b,a_b) with a_b/a_w ≥ L. Then ζ is constant on [−L,L]. The proof is provided in Appendix K. Theorem 9 (b) and (c) show that for certain distributions of (W,B), ζ is constant. In this case problem (19) is solved by the cubic spline interpolation of the data with natural boundary conditions (Ahlberg et al., 1967). The case of general ζ is solved by spatially adaptive natural cubic splines, which can be computed numerically by solving a linear system and characterized theoretically in an RKHS formalism. We provide details in Appendix O. 6 CONCLUSION AND DISCUSSION We obtained an explicit description of the implicit bias of gradient descent for mean squared error regression with wide shallow ReLU networks. We presented a result for the univariate case and generalizations to multivariate ReLU networks and networks with different activation functions. Our result can also help us characterize the training trajectory of gradient descent in function space. Our main result shows that the trained network outputs a function that interpolates the training data and has the minimum possible weighted 2-norm of the second derivative with respect to the input. This corresponds to a spatially adaptive interpolating spline. The space of interpolating splines is a linear space whose dimension is linear in the number of data points. Hence our result means that, even if the network has many parameters, the complexity of the trained functions will be adjusted to the number of data points. Interpolating splines have been studied in great detail in the literature, and our result allows us to directly apply corresponding generalization results to the case of trained networks.
This is related to approximation theory and characterizations for the number of samples and their spacing needed in order to approximate functions from a given smoothness class to a desired precision (Rieger and Zwicknagl, 2010; Wendland, 2004). Zhang et al. (2019) described the implicit bias of gradient descent as minimizing a RKHS norm from initialization. Our result can be regarded as making the RKHS norm explicit, thus providing an interpretable description of the bias in function space. Compared with Zhang et al. (2019), our results give a precise description of the role of the parameter initialization scheme, which determines the inverse curvature penalty function ζ . This gives us a rather good picture of how the initialization affects the implicit bias of gradient descent. This could be used in order to select a good initialization scheme. For instance, one could conduct a pre-assessment of the data to estimate the locations of the input space where the target function has a high curvature, and choose the parameter initialization accordingly. This is an interesting possibility to experiment with, based on our theoretical result. Our result can also be interpreted in combination with early stopping. The training trajectory is approximated by a smoothing spline, meaning that the network will filter out high frequencies which are usually associated to noise in the training data. This behaviour is sometimes referred to as a spectral bias (Rahaman et al., 2019).
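As a numerical sanity check of the closed form in Theorem 9(a), the following sketch (an illustration, not the authors' code; it assumes independent Gaussian W and B and uses SciPy for the quadrature) compares the formula with a direct evaluation of the integral from Proposition 7.

```python
# Illustrative sketch (not from the paper), assuming independent
# W ~ N(0, s_w^2) and B ~ N(0, s_b^2): compare the closed form of zeta in
# Theorem 9(a) with a numerical evaluation of the Proposition 7 integral
#   zeta(x) = \int |W|^3 p_{W,B}(W, -W x) dW.
import numpy as np
from scipy import integrate, stats

s_w, s_b = 1.0, 2.0

def zeta_closed_form(x):
    return 2 * s_w**3 * s_b**3 / (np.pi * (s_b**2 + x**2 * s_w**2) ** 2)

def zeta_numeric(x):
    integrand = lambda w: (abs(w) ** 3
                           * stats.norm.pdf(w, scale=s_w)
                           * stats.norm.pdf(-w * x, scale=s_b))
    val, _ = integrate.quad(integrand, -np.inf, np.inf)
    return val

for x in [-2.0, 0.0, 0.5, 3.0]:
    print(x, zeta_closed_form(x), zeta_numeric(x))
# The two values agree up to quadrature error; zeta peaks at x = 0 and decays
# like x^{-4}, so the curvature penalty 1/zeta grows away from the origin.
```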
1. What is the focus of the paper, and what are the key contributions regarding implicit bias and gradient descent? 2. What are the strengths of the proposed approach, particularly in its ability to characterize the minimum-kernel-norm solution? 3. What are the weaknesses or limitations of the current presentation, such as the need for more detail on finding u and v or comparing with other works? 4. Are there any questions about the assumptions made in the paper, such as inf_n \lambda_\min(\Theta_n) > 0, and how they might impact the results? 5. Is there anything else that could enhance the paper's clarity, quality, novelty, or reproducibility?
Review
Review This paper analyzes the implicit bias of gradient descent on a wide two-layer network with standard parameterization and initialization and the squared loss. It is first proved that in this setting, gradient descent on both layers is close to gradient descent on the second layer. Then the implicit bias of gradient descent on the second layer is characterized for a 1-dimensional regression problem, which can also be generalized to the high-dimensional case. I think it is nice to have an explicit characterization of the minimum-kernel-norm solution. Moreover, the observation that gradient descent with the standard parameterization and initialization basically only trains the second layer is also interesting. However, the current presentation also has many limitations: (1) Theorems 1, 2 and 6 consider gradient descent on an adjusted training set. Specifically, Theorems 1 and 2 claim the existence of u and v, which are used to adjust the training set, but it seems that how to find u and v is not discussed. Moreover, above Theorem 6, it is said that "If u and v in the solution of (17) are small, then the solution is close to the solution of (16)." How should we find u and v? Can it be proved that u and v are small? (2) The function g given by Theorems 1 and 2 is similar to the results presented in (Savarese et al., 2019) and (Ongie et al., 2020). Can you include a detailed comparison with their results and proof techniques? (3) In Theorem 4, it is assumed that inf_n \lambda_\min(\Theta_n) > 0. However, this can usually be proved in the NTK setting, for example in (Simon S. Du, Xiyu Zhai, Barnabas Poczos, Aarti Singh. Gradient Descent Provably Optimizes Over-parameterized Neural Networks). Can this assumption be proved? (4) The appendices should be included.
ICLR
Title Implicit bias of gradient descent for mean squared error regression with wide neural networks Abstract We investigate gradient descent training of wide neural networks and the corresponding implicit bias in function space. For 1D regression, we show that the solution of training a width-n shallow ReLU network is within n−1/2 of the function which fits the training data and whose difference from initialization has smallest 2-norm of the weighted second derivative with respect to the input. The curvature penalty function 1/ζ is expressed in terms of the probability distribution that is utilized to initialize the network parameters, and we compute it explicitly for various common initialization procedures. For instance, asymmetric initialization with a uniform distribution yields a constant curvature penalty, and thence the solution function is the natural cubic spline interpolation of the training data. While similar results have been obtained in previous works, our analysis clarifies important details and allows us to obtain significant generalizations. In particular, the result generalizes to multivariate regression and different activation functions. Moreover, we show that the training trajectories are captured by trajectories of spatially adaptive smoothing splines with decreasing regularization strength. 1 INTRODUCTION Understanding why neural networks trained in the overparametrized regime and without explicit regularization generalize well in practice is an important problem (Zhang et al., 2017). Some form of capacity control different from network size must be at play (Neyshabur et al., 2014) and specifically the implicit bias of parameter optimization has been identified to play a key role (Neyshabur et al., 2017). By implicit bias we mean that among the many hypotheses that fit the training data, the algorithm selects one which satisfies additional properties that may be beneficial for its performance on new data. Jacot et al. (2018) and Lee et al. (2019) showed that the training dynamics of shallow and deep wide neural networks is well approximated by that of the linear Taylor approximation of the models at a suitable initialization. Chizat et al. (2019) observe that a model can converge to zero training loss while hardly varying its parameters, a phenomenon that can be attributed to scaling of the output weights and makes the model behave as its linearization around the initialization. Zhang et al. (2019) consider linearized models for regression problems and show that gradient flow finds the global minimum of the loss function which is closest to initialization in parameter space. This type of analysis connects with trajectory based analysis of neural networks (Saxe et al., 2014). Oymak and Soltanolkotabi (2019) studied the overparametrized neural networks directly and showed that gradient descent finds a global minimizer of the loss function which is close to the initialization. Towards interpreting parameters in function space, Savarese et al. (2019) and Ongie et al. (2020) studied infinite-width neural networks with parameters having bounded norm, in 1D and multi-dimensional input spaces, respectively. They showed that, under a standard parametrization, the complexity of the functions represented by the network, as measured by the 1-norm of the second derivative, can be controlled by the 2-norm of the parameters. Using these results, one can show that gradient descent with `2 weight penalty leads to simple functions. Sahs et al. 
(2020) relates function properties, such as breakpoint and slope distributions, to the distributions of the network parameters. The implicit bias of parameter optimization has been investigated in terms of the properties of the loss function at the points reached by different optimization methodologies (Keskar et al., 2017; Wu et al., 2017; Dinh et al., 2017). In terms of the solutions, Maennel et al. (2018) show that gradient flow for shallow networks with rectified linear units (ReLU) initialized close to zero quantizes features in a way that depends on the training data but not on the network size. Williams et al. (2019) obtained results for 1D regression contrasting the kernel and adaptive regimes. Soudry et al. (2018) show that in classification problems with separable data, gradient descent with linear networks converges to a maxmargin solution. Gunasekar et al. (2018b) present a result on implicit bias for deep linear convolutional networks, and Ji and Telgarsky (2019) study non-separable data. Chizat and Bach (2020) show that gradient flow for logistic regression with infinitely wide two-layer networks yields a max-margin classifier in a certain space. Gunasekar et al. (2018a) analyze the implicit bias of different optimization methods (natural gradient, steepest and mirror descent) for linear regression and separable linear classification problems, and obtain characterizations in terms of minimum norm or max-margin solutions. In this work, we study the implicit bias of gradient descent for regression problems. We focus on wide ReLU networks and describe the bias in function space. In Section 2 we provide settings and notation. We present our main results in Section 3, and develop the main theory in Sections 4 and 5. In the interest of a concise presentation, technical proofs and extended discussions are deferred to appendices. 2 NOTATION AND PROBLEM SETUP Consider a fully connected network with d inputs, one hidden layer of width n, and a single output. For any given input x∈Rd, the output of the network is f(x,θ)= n∑ i=1 W (2) i φ(〈W (1) i ,x〉+b (1) i )+b (2), (1) where φ is a point-wise activation function,W (1)∈Rn×d,W (2)∈Rn, b(1)∈Rn and b(2)∈R are the weights and biases of layer l=1,2. We write θ=vec(∪2l=1{W (l),b(l)}) for the vector of all network parameters. These parameters are initialized by independent samples of pre-specified random variables W and B in the following way: W (1) i,j d = √ 1/dW, b(1)i d = √ 1/d B W (2) i d = √ 1/nW, b(2) d= √ 1/n B. (2) More generally, we will also allow weight-bias pairs to be sampled from a joint distribution of (W,B) which we only assume to be sub-Gaussian. In the analysis of Jacot et al. (2018); Lee et al. (2019), W and B are Gaussian N (0,σ2). In the default initialization of PyTorch,W and B have uniform distribution U(−σ,σ). The setting (1) is known as the standard parametrization. Some works (Jacot et al., 2018; Lee et al., 2019) utilize the so-called NTK parametrization, where the factor √ 1/n is carried outside of the trainable parameter. If we fix the learning rate for all parameters, gradient descent leads to different trajectories under these two parametrizations. Our results are presented for the standard parametrization. Details on this in Appendix C.3. We consider a regression problem for data {(xj , yj)}Mj=1 with inputs X = {xj}Mj=1 and outputs Y = {yj}Mj=1. For a loss function ` : R × R → R, the empirical risk of our function is L(θ)= ∑M j=1`(f(xj ,θ),yj). 
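For concreteness, a minimal NumPy sketch of the model (1) under the standard parametrization and the initialization (2) is shown below (an illustration only, not the authors' code; Gaussian W and B and the MSE loss are assumed).

```python
# Minimal sketch (not the authors' code) of the model (1) and the
# initialization (2), assuming W, B drawn i.i.d. from N(0, sigma^2).
import numpy as np

def init_params(n, d=1, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W1 = np.sqrt(1.0 / d) * rng.normal(0, sigma, size=(n, d))
    b1 = np.sqrt(1.0 / d) * rng.normal(0, sigma, size=n)
    W2 = np.sqrt(1.0 / n) * rng.normal(0, sigma, size=n)
    b2 = np.sqrt(1.0 / n) * rng.normal(0, sigma)
    return W1, b1, W2, b2

def f(x, params):
    """Network output (1) for a batch of inputs x of shape (M, d)."""
    W1, b1, W2, b2 = params
    hidden = np.maximum(x @ W1.T + b1, 0.0)   # ReLU activations, shape (M, n)
    return hidden @ W2 + b2

# Example: forward pass and empirical risk under the MSE loss.
params = init_params(n=1000)
x = np.linspace(-1, 1, 5).reshape(-1, 1)
y = np.sin(3 * x[:, 0])
risk = 0.5 * np.sum((f(x, params) - y) ** 2)
```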
We use full batch gradient descent with a fixed learning rate η to minimize L(θ). Writing θt for the parameter at time t, and θ0 for the initialization, this defines an iteration θt+1 =θt−η∇L(θ)=θt−η∇θf(X ,θt)T∇f(X ,θt)L, (3) where f(X ,θt)=[f(x1,θt),...,f(xM ,θt)]T is the vector of network outputs for all training inputs, and ∇f(X ,θt)L is the gradient of the loss with respect to the model outputs. We will use subscript i to index neurons and subscript t to index time. Let Θ̂n be the empirical neural tangent kernel (NTK) of the standard parametrization at time 0, which is the matrix Θ̂n= 1n∇θf(X ,θ0)∇θf(X ,θ0) T . 3 MAIN RESULTS AND DISCUSSION We obtain a description of the implicit bias in function space when applying gradient descent to regression problems with wide ReLU neural networks. We prove the following result in Appendix D. An interpretation of the result and generalizations are given further below. Theorem 1 (Implicit bias of gradient descent in wide ReLU networks). Consider a feedforward network with a single input unit, a hidden layer of n rectified linear units, and a single linear output unit. Assume standard parametrization (1) and that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W,B) (2) with joint density pW,B. Then, for any finite data set {(xj ,yj)}Mj=1 and sufficiently large n there exist constant u and v so that optimization of the mean square error on the adjusted training data {(xj , yj − uxj − v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which the output function f(x,θ∗) (1) attains zero training error. Furthermore, letting ζ(x)= ∫ R|W | 3pW,B(W,−Wx) dW and S=supp(ζ)∩[minixj ,maxixj ], we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈S (the 2-norm over S) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C2(S) ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx subject to g(xj)=yj−uxj−v, j=1,...,M. (4) Interpretation An intuitive interpretation of the theorem is that at those regions of the input space where ζ is smaller, we can expect the difference between the functions after and before training to have a small curvature. We may call ρ=1/ζ a curvature penalty function. The bias induced from initialization is expressed explicitly. We note that under suitable asymmetric parameter initialization (see Appendix C.2), it is possible to achieve f(·,θ0)≡0. Then the regularization is on the curvature of the output function itself. In Theorem 9 we obtain the explicit form of ζ for various common parameter initialization procedures. In particular, when the parameters are initialized independently from a uniform distribution on a finite interval, ζ is constant and the problem is solved by the natural cubic spline interpolation of the data. The adjustment of the training data simply accounts for the fact that second derivatives define a function only up to linear terms. In practice we can use the coefficients a and b of linear regression yj = axj + b+ j , j = 1, ... ,M , and set the adjusted data as {(xj , j)}Mj=1. Although Theorem 1 describes the gradient descent training with the linearly adjusted data, this result can also approximately describe training with the original training data. Further details are provided in Appendix L. We illustrate Theorem 1 numerically in Figure 1 and more extensively in Appendix A. 
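When ζ is constant (e.g. the uniform-type initializations of Theorem 9(b)-(c)) and the initial output function vanishes, the variational problem (4) is solved by the natural cubic spline through the adjusted data. The following sketch (an illustration under these assumptions, not the authors' code) computes this predicted infinite-width limit with SciPy, including the linear adjustment of the data described above.

```python
# Illustrative sketch (not from the paper): the predicted limit in Theorem 1
# for constant zeta and vanishing initial output function, namely the natural
# cubic spline interpolant of the linearly adjusted data.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([-1.0, -0.4, 0.1, 0.6, 1.0])
y = np.sin(3 * x)

# Linear adjustment: fit y ~ a x + b and keep the residuals (see Section 3).
a, b = np.polyfit(x, y, deg=1)
resid = y - (a * x + b)

# Natural boundary conditions give the solution of (4) when zeta is constant.
g_star = CubicSpline(x, resid, bc_type='natural')

x_test = np.linspace(-1, 1, 200)
prediction = a * x_test + b + g_star(x_test)  # add the linear trend back
```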
In close agreement with the theory, the solution to the variational problem captures the solution of gradient descent training uniformly with error of order n−1/2. To illustrate the effect of the curvature penalty function, Figure 1 also shows the solutions to the variational problem for different values of ζ corresponding to different initialization distributions. We see that at input points where ζ is small / peaks strongly, the solution function tends to have a lower curvature / be able to use a higher curvature in order to fit the data. With the presented bias description we can formulate heuristics for parameter initialization either to ease optimization or also to induce specific smoothness priors on the solutions. In particular, by Proposition 8 any curvature penalty 1/ζ can be implemented by an appropriate choice of the parameter initialization distribution. By our analysis, the effective capacity of the model, understood as the set of possible output functions after training, is adapted to the sizeM of the training dataset and is well captured by a space of cubic splines relative to the initial function. This is a space with dimension of orderM independently of the number of parameters of the network. Strategy of the proof In Section 4, we observe that for a linearized model gradient descent with sufficiently small step size finds the minimizer of the training objective which is closest to the initial parameter (similar to a result by Zhang et al., 2019). Then Theorem 4 shows that the training dynamics of the linearization of a wide network is well approximated in parameter and function space by that of a lower dimensional linear model which trains only the output weights. This property is sometimes taken for granted and we show that it holds for the standard parametrization, although it does not hold for the NTK parametrization (defined in Appendix C.3), which leads to the adaptive regime. In Section 5, for networks with a single input and a single layer of ReLUs, we relate the implicit bias of gradient descent in parameter space to an alternative optimization problem. In Theorem 5 we show that the solution of this problem has a well defined limit as the width of the network tends to infinity, which allows us to obtain a variational formulation. In Theorem 6 we translate the description of the bias from parameter space to function space. In Theorem 9 we provide explicit descriptions of the weight function for various common initialization procedures. Finally, we can utilize recent results bounding the difference in function space of the solutions obtained from training a wide network and its linearization (Lee et al., 2019, Theorem H.1). Generalizations Theorem 4 has several generalizations elaborated in Appendix P. For multivariate regression, we have the following theorem. Theorem 2 (Multivariate regression). Use the same network setting as in Theorem 1 except that the number of input units changes to d. Assume that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W ,B) where W is a d-dimensional random vector and B is a random variable. Then, for any finite data set {(xj ,yj)}Mi=1 and sufficiently large n there exist constant vector u and constant v so that optimization of the mean square error on the adjusted training data {(xj ,yj−〈u,xj〉−v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which f(x,θ∗) attains zero training error. 
Furthermore, letU=‖W‖2, V = W/‖W‖2, C =−B/‖W‖2 and ζ(V ,c) = pV,C(V ,c)E(U2|V = V ,C = c) where pV,C is the joint density of (V ,C). Then we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈Rd (the 2-norm over Rd) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C(Rd) ∫ supp(ζ) ( R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c) )2 ζ(V ,c) dV dc subject to g(xj)=yj , j=1,...,M, R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c)=0, (V ,c) 6∈supp(ζ). (5) Here R is the Radon transform which is defined by R{f}(ω, b) := ∫ 〈ω,x〉=b f(x)ds(x), and the power of the negative Laplacian (−∆)(d+1)/2 is the operator defined in Fourier domain by ̂(−∆)(d+1)/2f(ξ)=‖ξ‖d+1f̂(ξ). For different activation functions, we have the following corollary. Corollary 3 (Different activation functions). Use the same setting as in Theorem 1 except that we use the activation function φ instead of ReLU. Suppose that φ is a Green’s function of a linear operator L, i.e. Lφ= δ, where δ denotes the Dirac delta function. Assume that the activation function φ is homogeneous of degree k, i.e. φ(ax)=akφ(x) for all a>0. Then we can find a function p satisfying Lp≡ 0 and adjust training data {(xj ,yj)}Mj=1 to {(xj ,yj−p(xj)}Mj=1. After that, the statement in Theorem 1 holds with the variational problem (4) changed to min g∈C2(S) ∫ S 1 ζ(x) [L(g(x)−f(x,θ0))]2 dx s.t. g(xj)=yj−p(xj), j=1,...,M, (6) where ζ(x)=pC(x)E(W2k|C=x) and S=supp(ζ)∩[minixi,maxixi]. Moreover, our method allows us to describe the optimization trajectory in function space (see Appendix N). If we substitute constraints g(xj)=yj in (4) by a quadratic term 1λ 1 M ∑M j=1(g(xj)−yj)2 added to the objective, we obtain the variational problem for a so-called spatially adaptive smoothing spline (see Abramovich and Steinberg, 1996; Pintore et al., 2006). This problem can be solved explicitly and can be shown to approximate early stopping. To be more specific, the solution to following optimization problem approximates the output function of the network after gradient descent training for t steps with learning rate η̄/n: min g∈C2(S) M∑ j=1 [g(xj)−yj ]2+ 1 η̄t ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx. (7) Related works Zhang et al. (2019) described the implicit bias of gradient descent in the kernel regime as minimizing a kernel norm from initialization, subject to fitting the training data. Our result can be regarded as making the kernel norm explicit, thus providing an interpretable description of the bias in function space and further illuminating the role of the parameter initialization procedure. We prove the equivalence in Appendix M. Savarese et al. (2019) showed that infinite-width networks with 2-norm weight regularization represent functions with smallest 1-norm of the second derivative, an example of which are linear splines. We discuss this in Appendix C.4. A recent preprint further develops this direction for two-layer networks with certain activation functions that interpolate data while minimizing a weight norm (Parhi and Nowak, 2019). In contrast, our result characterizes the solutions of training from a given initialization without explicit regularization, which turn out to minimize a weighted 2-norm of the second derivative and hence correspond to cubic splines. In finishing this work we became aware of a recent preprint (Heiss et al., 2019) which discusses ridge weight penalty, adaptive splines, and early stopping for one-input ReLU networks training only the output layer. Williams et al. 
(2019) showed a similar result in the kernel regime for shallow ReLU networks where they train only the second layer and from zero initialization. In contrast, we consider the initialization of the second layer and show that the difference from the initial output function is implicitly regularized by gradient descent. We show the result of training both layers and prove that it can be approximated by training only the second layer in Theorem 4. In addition, we give the explicit form of ζ in Theorem 9, while the ζ given by Williams et al. (2019) has a minor error because of a typo in their computation. Most importantly, our statement can be generalized to multivariate regression, different activation functions, training trajectories. 4 WIDE NETWORKS AND PARAMETER SPACE 4.1 IMPLICIT BIAS IN PARAMETER SPACE FOR A LINEARIZED MODEL In this section we describe how training a linearized network or a wide network by gradient descent leads to solutions that are biased, having parameter values close to the values at initialization. First, we consider the following linearized model: f lin(x,ω)=f(x,θ0)+∇θf(x,θ0)(ω−θ0). (8) We write ω for the parameter of the linearized model, in order to distinguish it from the parameter of the nonlinearized model. The empirical loss of the linearized model is defined by Llin(ω)= ∑M j=1`(f lin(xj ,ω),yj). The gradient descent iteration for the linearized model is given by ω0 =θ0, ωt+1 =ωt−η∇θf(X ,θ0)T∇f lin(X ,ωt)L lin. (9) Next, we consider wide neural networks. According to Lee et al. (2019, Theorem H.1), sup t ‖f lin(x,ωt)−f(x,θt)‖2 =O(n− 1 2 ) with arbitrarily high probability. So gradient descent training of a wide network or of the linearized model give similar trajectories and solutions in function space. Both fit the training data perfectly, meaning f lin(X ,ω∞)=f(X ,θ∞)=Y , and are also approximately equal outside the training data. Under the assumption that rank(∇θf(X ,θ0)) =M , the gradient descent iterations (9) converge to the unique global minimum that is closest to initialization (Gunasekar et al., 2018a; Zhang et al., 2019), which is the solution of following constrained optimization problem (further details and remarks are provided in Appendix E): min ω ‖ω−θ0‖2 s.t. f lin(X ,ω)=Y. (10) 4.2 TRAINING ONLY THE OUTPUT LAYER APPROXIMATES TRAINING ALL PARAMETERS From now on we consider networks with a single hidden layer of n ReLUs and a linear output f(x,θ) = ∑n i=1W (2) i [W (1) i x+ b (1) i ]+ + b (2). We show that the functions and parameter vectors obtained by training the linearized model are close to those obtained by training only the output layer. Hence, by the arguments of the previous section, training all parameters of a wide network or training only the output layer gives similar functions. Let θ0 = vec(W (1) ,b (1) ,W (2) ,b (2) ) be the parameter at initialization so that f lin(·,θ0) = f(·,θ0). After training the linearized network let the parameter be ω∞ = vec(Ŵ (1),b̂(1),Ŵ (2),b̂(2)). Using initialization (2), with probability arbitrarily close to 1,W (1) i ,b (1) i =O(1) andW (2) i ,b (2) =O(n− 1 2 ).1 Therefore, writingH for the Heaviside function, we have ∇ W (1) i ,b (1) i f(x,θ0)= [ W (2) i H(W (1) i x+b (1) )·x,W (2)i H(W (1) i x+b (1) i ) ] =O(n− 1 2 ), ∇ W (2) i ,b (2)f(x,θ0)= [ [W (1) i x+b (1) i ]+ ,1 ] =O(1). (11) So when n is large, if we use gradient descent with a constant learning rate for all parameters, then the changes ofW (1), b(1), b(2) are negligible compared with the changes ofW (2). 
So approximately we can train just the output weights,W (2)i ,i=1,...,n, and fix all other parameters. This corresponds to a smaller linear model. Let ω̃t = vec(W (1) t ,b (1) t ,W̃ (2) t ,b (2) t ) be the parameter at time t under the update rule whereW (1) ,b (1) , b (2) are kept fixed at their initial values, and W̃ (2) 0 =W (2) , W̃ (2) t+1 =W̃ (2) t −η∇W (2)Llin(ω̃t). (12) Let ω̃∞ = limt→∞ ω̃t. By the above discussion, we expect that f lin(x,ω̃∞) is close to f lin(x,ω∞). In fact, we prove the following for the MSE loss. The proof and further remarks are provided in Appendix F. We relate Theorem 4 to training a wide network in Appendix G. Theorem 4 (Training only output weights vs linearized network). Consider a finite data set {(xi,yi)}Mi=1. Assume that (1) we use the MSE loss `(ŷ,y) = 12‖ŷ−y‖ 2 2; (2) infnλmin(Θ̂n)> 0. Let ωt denote the parameters of the linearized model at time t when we train all parameters using (9), and let ω̃t denote the parameters at time t when we only train weights of the output layer using (12). If we use the same learning rate η in these two training processes and η < 2 nλmax(Θ̂n) , then for any x∈R, with probability arbitrarily close to 1 over the random initialization (2), sup t |f lin(x,ω̃t)−f lin(x,ωt)|=O(n−1), as n→∞. (13) Moreover, in terms of the parameter trajectories we have supt ‖W (1) t − Ŵ (1) t ‖2 = O(n−1), supt‖b (1) t −b̂ (1) t ‖2 =O(n−1), supt‖W̃ (2) t −Ŵ (2) t ‖2 =O(n−3/2), supt‖b (2) t −b̂ (2) t ‖=O(n−1). In view of the arguments in this section, in the next sections we will focus on training only the output weights and understanding the corresponding solution functions. 1More precisely, for any δ>0, ∃C, s.t. with prob. 1−δ, |W (2)i |,|b (2)|≤Cn−1/2 and |W (1)i |,|b (1) i |≤C. 5 GRADIENT DESCENT LEADS TO SIMPLE FUNCTIONS In this section we provide a function space characterization of the implicit bias previously described in parameter space. According to (10), gradient descent training of the output weights (12) achieves zero loss, f lin(xj ,ω̃∞)−f lin(xj ,θ0)= ∑n i=1(W̃ (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M , with minimum ‖W̃ (2)−W (2)‖22. Hence gradient descent is actually solving min W (2) ‖W (2)−W (2)‖22 s.t. n∑ i=1 (W (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M. (14) To simplify the presentation, in the following we let f lin(x,θ0) ≡ 0 by using the ASI trick (see Appendix C.2). The analysis still goes through without this. 5.1 INFINITE WIDTH LIMIT We reformulate problem (14) in a way that allows us to consider the limit of infinitely wide networks, with n → ∞, and obtain a deterministic counterpart, analogous to the convergence of the NTK. Let µn denote the empirical distribution of the samples (W (1) i , bi) n i=1, so that µn(A) = 1 n ∑n i=11A ( (W (1) i ,bi) ) . Here 1A is the indicator function for measurable subsets A in R2. We further consider a function αn : R2→R whose value encodes the difference of the output weight from its initialization for a hidden unit with input weight and bias given by the argument, αn(W (1) i ,bi)=n(W (2) i −W (2) i ). Then (14) with ASI can be rewritten as min αn∈C(R2) ∫ R2 α2n(W (1),b) dµn(W (1),b) s.t. ∫ R2 αn(W (1),b)[W (1)xj+b]+ dµn(W (1),b)=yj , (15) where j ranges from 1 toM . Here we minimize over functions αn inC(R2), but since only the values on (W (1)i ,bi) n i=1 are taken into account, we can take any continuous interpolation of αn(W (1) i ,bi), i=1,...,n. Now we can consider the infinite width limit. Let µ be the probability measure of (W,B). 
We obtain a continuous version of problem (15) by substituting µ for µn. Since we know that µn weakly converges to µ, we prove that in fact the solution of problem (15) converges to the solution of the continuous problem, which is formulated in the following theorem. Details in Appendix H. Theorem 5. Let (W (1)i ,bi)ni=1 be i.i.d. samples from a pair (W,B) of random variables with finite fourth moment. Suppose µn is the empirical distribution of (W (1) i ,bi) n i=1 and αn(W (1),b) is the solution of (15). Let α(W (1),b) be the solution of the continuous problem with µ in place of µn. Then for any bounded [−L,L], supx∈[−L,L]|gn(x,αn)−g(x,α)|=O(n−1/2) with high probability, where gn(x,αn)= ∫ R2αn(W (1),b)[W (1)x+b]+ dµn(W (1),b) is the function represented by a network with n hidden neurons after training, and g(x,α)= ∫ R2α(W (1),b)[W (1)x+b]+ dµ(W (1),b) is the function represented by the infinite-width network. 5.2 FUNCTION SPACE DESCRIPTION OF THE IMPLICIT BIAS Next we connect the problem from the previous section to second derivatives by first rewriting it in terms of breakpoints. Consider the breakpoint c=−b/W (1) of a ReLU with weightW (1) and bias b. We define a corresponding random variable C=−B/W and let ν denote the distribution of (W,C).2 Then with γ(W (1),c)=α(W (1),−cW (1)) the continuous version of (15) is equivalently given as min γ∈C(R2) ∫ R2 γ2(W (1),c) dν(W (1),c) s.t. ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , (16) where j ranges from 1 toM . Let νC denote the distribution of C=−B/W , and νW|C=c the conditional distribution of W given C = c. Suppose νC has support supp(νC) and a density function pC(c). 2Here we assume that P(W=0)=0 so that the random variable C is well defined. It is not an important restriction, since neurons with weightW (1)=0 give constant functions that can be absorbed in the bias of output layer. Let g(x,γ) = ∫ R2 γ(W (1), c)[W (1)(x− c)]+ dν(W (1), c), which again corresponds to the output function of the network. Then, the second derivative g′′ with respect to x (see Appendix I) satisfies g′′(x,γ)=pC(x) ∫ Rγ(W (1),x) ∣∣W (1)∣∣ dνW|C=x(W (1)). Thus γ(W (1),c) is closely related to g′′(x,γ) and we can try to express (16) in terms of g′′(x,γ). Since g′′(x,γ) determines g(x,γ) only up to linear functions, we consider the following problem: min γ∈C(R2),u∈R,v∈R ∫ R2 γ2(W (1),c) dν(W (1),c) subject to uxj+v+ ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , j=1,...,M. (17) Here u,v are not included in the cost. They add a linear function to the output of the neural network. If u and v in the solution of (17) are small, then the solution is close to the solution of (16). Ongie et al. (2020) also use this trick to simplify the characterization of neural networks in function space. Next we study the solution of (17) in function space. This is our main technical result. Theorem 6 (Implicit bias in function space). Assume W and B are random variables with P(W = 0) = 0, and let C =−B/W . Let ν denote the probability distribution of (W,C). Suppose (γ,u,v) is the solution of (17), and consider the corresponding output function g(x,(γ,u,v))=ux+v+ ∫ R2 γ(W (1),c)[W (1)(x−c)]+ dν(W (1),c). (18) Let νC denote the marginal distribution of C and assume it has a density function pC . Let E(W2|C) denote the conditional expectation ofW2 given C. Consider the function ζ(x)=pC(x)E(W2|C=x). Assume that training data xi∈supp(ζ), i=1,...,m. Consider the set S=supp(ζ)∩[minixi,maxixi]. 
Then g(x,(γ,u,v)) satisfies g′′(x,(γ,u,v))=0 for x 6∈S and for x∈S it is the solution of the following problem: min h∈C2(S) ∫ S (h′′(x))2 ζ(x) dx s.t. h(xj)=yj , j=1,...,m. (19) The proof is provided in Appendix I, where we also present the corresponding statement without ASI. We study the explicit form of this function in the next section. 5.3 EXPLICIT FORM OF THE CURVATURE PENALTY FUNCTION Proposition 7. Let pW,B denote the joint density function of (W,B) and let C =−B/W so that pC is the breakpoint density. Then ζ(x)=E(W 2|C=x)pC(x)= ∫ R|W | 3pW,B(W,−Wx) dW . The proof is presented in Appendix J. If we allow the initial weight and biases to be sampled from a suitable joint distribution, we can make the curvature penalty ρ=1/ζ arbitrary. Proposition 8 (Constructing any curvature penalty). Given any function % : R→ R>0, satisfying Z = ∫ R 1 % <∞, if we set the density of C as pC(x) = 1 Z 1 %(x) and make W independent of C with non-vanishing second moment, then (E(W 2|C=x)pC(x))−1 =(E(W 2)pC(x))−1∝%(x), x∈R. Further remarks on sampling and independent variables are provided in Appendix J. To conclude this section we compute the explicit form of ζ for several common initialization procedures. Theorem 9 (Explicit form of the curvature penalty for common initializations). (a) Gaussian initialization. Assume thatW and B are independent,W∼N (0,σ2w) and B∼N (0,σ2b ). Then ζ is given by ζ(x)= 2σ 3 wσ 3 b π(σ2b+x 2σ2w) 2 . (b) Binary-uniform initialization. Assume that W and B are independent, W ∈ {−1, 1} and B∼U(−ab,ab) with ab≥L. Then ζ is constant on [−L,L]. (c) Uniform initialization. Assume that W and B are independent, W ∼ U(−aw, aw) and B∼U(−ab,ab) with abaw ≥L. Then ζ is constant on [−L,L]. The proof is provided in Appendix K. Theorem 9 (b) and (c) show that for certain distributions of (W,B), ζ is constant. In this case problem (19) is solved by the cubic spline interpolation of the data with natural boundary conditions (Ahlberg et al., 1967). The case of general ζ is solved by space adaptive natural cubic splines, which can be computed numerically by solving a linear system and theoretically in an RKHS formalism. We provide details in Appendix O. 6 CONCLUSION AND DISCUSSION We obtained a explicit description of the implicit bias of gradient descent for mean squared error regression with wide shallow ReLU networks. We presented a result for the univariate case and generalizations to multi-variate ReLU networks and networks with different activation functions. Our result can also help us characterize the training trajectory of gradient descent in function space. Our main result shows that the trained network outputs a function that interpolates the training data and has the minimum possible weighted 2-norm of the second derivative with respect to the input. This corresponds to an spatially adaptive interpolating spline. The space of interpolating splines is a linear space which has a dimension that is linear in the number of data points. Hence our result means that, even if the network has many parameters, the complexity of the trained functions will be adjusted to the number of data points. Interpolating splines have been studied in great detail in the literature and our result allows us to directly apply corresponding generalization results to the case of trained networks. 
This is related to approximation theory and characterizations for the number of samples and their spacing needed in order to approximate functions from a given smoothness class to a desired precision (Rieger and Zwicknagl, 2010; Wendland, 2004). Zhang et al. (2019) described the implicit bias of gradient descent as minimizing a RKHS norm from initialization. Our result can be regarded as making the RKHS norm explicit, thus providing an interpretable description of the bias in function space. Compared with Zhang et al. (2019), our results give a precise description of the role of the parameter initialization scheme, which determines the inverse curvature penalty function ζ . This gives us a rather good picture of how the initialization affects the implicit bias of gradient descent. This could be used in order to select a good initialization scheme. For instance, one could conduct a pre-assessment of the data to estimate the locations of the input space where the target function has a high curvature, and choose the parameter initialization accordingly. This is an interesting possibility to experiment with, based on our theoretical result. Our result can also be interpreted in combination with early stopping. The training trajectory is approximated by a smoothing spline, meaning that the network will filter out high frequencies which are usually associated to noise in the training data. This behaviour is sometimes referred to as a spectral bias (Rahaman et al., 2019).
1. What is the focus of the paper regarding implicit bias in gradient descent for wide, 1-hidden layer networks used for regression? 2. What are the strengths of the paper, particularly in terms of its rigorous analysis and presentation of empirical and theoretical examples? 3. Are there any limitations or areas where the authors could provide more intuition or clarity, such as in the technical results for generalizations or the assumptions made in the paper? 4. How does the paper position itself relative to other works in the field, and what connections could be made between the implicit bias identified in this work and generalization? 5. Are there any minor comments or suggestions for improving the readability of the paper, such as reorganizing the introduction or correcting typos in the appendices?
Review
Review ###################################################################### Paper Summary This work analyzes the implicit bias of gradient descent for wide, 1 hidden layer networks used for regression and provides a characterization of this bias in function space. At a high level, as network width increases, gradient descent leads to a solution given by a variational problem penalizing the product of the curvature and the square of the second derivative. The proof of this proceeds as follows: (1) The solution given by gradient descent on the network can be approximated by the solution by gradient descent on a linear model. (2) Under the standard initialization, the solution for the linear model can be approximated by that of a smaller linear model (corresponding to training the last layer). (3) The implicit bias for this problem is linked to an alternate optimization problem. (4) The solution to this problem is given by solution to the the variational problem as network width goes to infinity. ###################################################################### Strengths 2.1. The results are presented rigorously and the authors consider a number of generalizations involving (1) alternate distributions for weights/biases (2) univariate and multivariate regression (3) alternate nonlinearities. In particular, an extended discussion of generalizations is provided in Appendix O (some of the discussion points in O.3 - O.5 would be nice to include in a conclusion). 2.2. The empirical evidence presented in the main text and Appendix was very useful in providing an intuitive explanation around Theorem 1. I also particularly liked that the authors provided an interpretation paragraph on page 2, which among other points addressed the reason for involving a linear adjustment to the training data. 2.3. The authors position their results well relative to a large number of related works. Appendix C clarified a lot of the points regarding related work in the main text especially regarding comparisons between the results of this work and those of Savarese et al. 2019. ###################################################################### Minor Limitations 3.1. While I found this to be an interesting and rigorous work, I feel it would be helpful if the authors could provide a bit more intuition around the technical results for the generalizations. In particular, I found the interpretation and visualizations presented in the work very helpful for understanding the implicit bias described by Theorem 1. However, it would be helpful if the authors could provide a similar intuition for the multi-variate regression setting. 3.2. I found this work a bit difficult to read through due to the several jumps between equation references. I think one thing that would help improve the readability is adjusting the Appendix such that related equations in the main text are above the related sections in the Appendix. One example of this is the need to jump back and forth between equations 15, 16, 17 of the main text in Appendix D. 3.3. I feel the authors could present some of the assumptions more clearly in the main text. As a quick example, I believe the authors assume in equation (9) that grad_(theta) f(X, theta_0) has rank M (as is done in Appendix E), but unless I'm mistaken, this is not clearly stated in the main text, and it would be nice to understand how this rank requirement relates to the width n of the network (or whether this is not an obvious relationship). 3.4. 
(Very Minor) I think this work could benefit from a discussion of how the identified implicit bias could connect with generalization. For example, is there any way to understand which curvature penalties would yield solutions that generalize better? ###################################################################### Score and Rationale I would vote for accepting this paper. I found the result to be insightful in characterizing the inductive bias of 1-hidden-layer fully connected networks used for regression. The authors present a rigorous analysis, which they complement with a number of empirical and theoretical examples. ###################################################################### Minor Comments 5.1. (Very minor style recommendation) The current introduction is a nice summary of related work, but I think the paper would be a bit more readable if the main results and discussion were placed in front of these related works and these works were merged with the other related works section. 5.2. (Minor typo) The last sentence of Appendix C.2 appears to be incomplete.
ICLR
Title Implicit bias of gradient descent for mean squared error regression with wide neural networks Abstract We investigate gradient descent training of wide neural networks and the corresponding implicit bias in function space. For 1D regression, we show that the solution of training a width-n shallow ReLU network is within n−1/2 of the function which fits the training data and whose difference from initialization has smallest 2-norm of the weighted second derivative with respect to the input. The curvature penalty function 1/ζ is expressed in terms of the probability distribution that is utilized to initialize the network parameters, and we compute it explicitly for various common initialization procedures. For instance, asymmetric initialization with a uniform distribution yields a constant curvature penalty, and thence the solution function is the natural cubic spline interpolation of the training data. While similar results have been obtained in previous works, our analysis clarifies important details and allows us to obtain significant generalizations. In particular, the result generalizes to multivariate regression and different activation functions. Moreover, we show that the training trajectories are captured by trajectories of spatially adaptive smoothing splines with decreasing regularization strength. 1 INTRODUCTION Understanding why neural networks trained in the overparametrized regime and without explicit regularization generalize well in practice is an important problem (Zhang et al., 2017). Some form of capacity control different from network size must be at play (Neyshabur et al., 2014) and specifically the implicit bias of parameter optimization has been identified to play a key role (Neyshabur et al., 2017). By implicit bias we mean that among the many hypotheses that fit the training data, the algorithm selects one which satisfies additional properties that may be beneficial for its performance on new data. Jacot et al. (2018) and Lee et al. (2019) showed that the training dynamics of shallow and deep wide neural networks is well approximated by that of the linear Taylor approximation of the models at a suitable initialization. Chizat et al. (2019) observe that a model can converge to zero training loss while hardly varying its parameters, a phenomenon that can be attributed to scaling of the output weights and makes the model behave as its linearization around the initialization. Zhang et al. (2019) consider linearized models for regression problems and show that gradient flow finds the global minimum of the loss function which is closest to initialization in parameter space. This type of analysis connects with trajectory based analysis of neural networks (Saxe et al., 2014). Oymak and Soltanolkotabi (2019) studied the overparametrized neural networks directly and showed that gradient descent finds a global minimizer of the loss function which is close to the initialization. Towards interpreting parameters in function space, Savarese et al. (2019) and Ongie et al. (2020) studied infinite-width neural networks with parameters having bounded norm, in 1D and multi-dimensional input spaces, respectively. They showed that, under a standard parametrization, the complexity of the functions represented by the network, as measured by the 1-norm of the second derivative, can be controlled by the 2-norm of the parameters. Using these results, one can show that gradient descent with `2 weight penalty leads to simple functions. Sahs et al. 
(2020) relates function properties, such as breakpoint and slope distributions, to the distributions of the network parameters. The implicit bias of parameter optimization has been investigated in terms of the properties of the loss function at the points reached by different optimization methodologies (Keskar et al., 2017; Wu et al., 2017; Dinh et al., 2017). In terms of the solutions, Maennel et al. (2018) show that gradient flow for shallow networks with rectified linear units (ReLU) initialized close to zero quantizes features in a way that depends on the training data but not on the network size. Williams et al. (2019) obtained results for 1D regression contrasting the kernel and adaptive regimes. Soudry et al. (2018) show that in classification problems with separable data, gradient descent with linear networks converges to a maxmargin solution. Gunasekar et al. (2018b) present a result on implicit bias for deep linear convolutional networks, and Ji and Telgarsky (2019) study non-separable data. Chizat and Bach (2020) show that gradient flow for logistic regression with infinitely wide two-layer networks yields a max-margin classifier in a certain space. Gunasekar et al. (2018a) analyze the implicit bias of different optimization methods (natural gradient, steepest and mirror descent) for linear regression and separable linear classification problems, and obtain characterizations in terms of minimum norm or max-margin solutions. In this work, we study the implicit bias of gradient descent for regression problems. We focus on wide ReLU networks and describe the bias in function space. In Section 2 we provide settings and notation. We present our main results in Section 3, and develop the main theory in Sections 4 and 5. In the interest of a concise presentation, technical proofs and extended discussions are deferred to appendices. 2 NOTATION AND PROBLEM SETUP Consider a fully connected network with d inputs, one hidden layer of width n, and a single output. For any given input x∈Rd, the output of the network is f(x,θ)= n∑ i=1 W (2) i φ(〈W (1) i ,x〉+b (1) i )+b (2), (1) where φ is a point-wise activation function,W (1)∈Rn×d,W (2)∈Rn, b(1)∈Rn and b(2)∈R are the weights and biases of layer l=1,2. We write θ=vec(∪2l=1{W (l),b(l)}) for the vector of all network parameters. These parameters are initialized by independent samples of pre-specified random variables W and B in the following way: W (1) i,j d = √ 1/dW, b(1)i d = √ 1/d B W (2) i d = √ 1/nW, b(2) d= √ 1/n B. (2) More generally, we will also allow weight-bias pairs to be sampled from a joint distribution of (W,B) which we only assume to be sub-Gaussian. In the analysis of Jacot et al. (2018); Lee et al. (2019), W and B are Gaussian N (0,σ2). In the default initialization of PyTorch,W and B have uniform distribution U(−σ,σ). The setting (1) is known as the standard parametrization. Some works (Jacot et al., 2018; Lee et al., 2019) utilize the so-called NTK parametrization, where the factor √ 1/n is carried outside of the trainable parameter. If we fix the learning rate for all parameters, gradient descent leads to different trajectories under these two parametrizations. Our results are presented for the standard parametrization. Details on this in Appendix C.3. We consider a regression problem for data {(xj , yj)}Mj=1 with inputs X = {xj}Mj=1 and outputs Y = {yj}Mj=1. For a loss function ` : R × R → R, the empirical risk of our function is L(θ)= ∑M j=1`(f(xj ,θ),yj). 
We use full batch gradient descent with a fixed learning rate η to minimize L(θ). Writing θt for the parameter at time t, and θ0 for the initialization, this defines an iteration θt+1 =θt−η∇L(θ)=θt−η∇θf(X ,θt)T∇f(X ,θt)L, (3) where f(X ,θt)=[f(x1,θt),...,f(xM ,θt)]T is the vector of network outputs for all training inputs, and ∇f(X ,θt)L is the gradient of the loss with respect to the model outputs. We will use subscript i to index neurons and subscript t to index time. Let Θ̂n be the empirical neural tangent kernel (NTK) of the standard parametrization at time 0, which is the matrix Θ̂n= 1n∇θf(X ,θ0)∇θf(X ,θ0) T . 3 MAIN RESULTS AND DISCUSSION We obtain a description of the implicit bias in function space when applying gradient descent to regression problems with wide ReLU neural networks. We prove the following result in Appendix D. An interpretation of the result and generalizations are given further below. Theorem 1 (Implicit bias of gradient descent in wide ReLU networks). Consider a feedforward network with a single input unit, a hidden layer of n rectified linear units, and a single linear output unit. Assume standard parametrization (1) and that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W,B) (2) with joint density pW,B. Then, for any finite data set {(xj ,yj)}Mj=1 and sufficiently large n there exist constant u and v so that optimization of the mean square error on the adjusted training data {(xj , yj − uxj − v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which the output function f(x,θ∗) (1) attains zero training error. Furthermore, letting ζ(x)= ∫ R|W | 3pW,B(W,−Wx) dW and S=supp(ζ)∩[minixj ,maxixj ], we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈S (the 2-norm over S) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C2(S) ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx subject to g(xj)=yj−uxj−v, j=1,...,M. (4) Interpretation An intuitive interpretation of the theorem is that at those regions of the input space where ζ is smaller, we can expect the difference between the functions after and before training to have a small curvature. We may call ρ=1/ζ a curvature penalty function. The bias induced from initialization is expressed explicitly. We note that under suitable asymmetric parameter initialization (see Appendix C.2), it is possible to achieve f(·,θ0)≡0. Then the regularization is on the curvature of the output function itself. In Theorem 9 we obtain the explicit form of ζ for various common parameter initialization procedures. In particular, when the parameters are initialized independently from a uniform distribution on a finite interval, ζ is constant and the problem is solved by the natural cubic spline interpolation of the data. The adjustment of the training data simply accounts for the fact that second derivatives define a function only up to linear terms. In practice we can use the coefficients a and b of linear regression yj = axj + b+ j , j = 1, ... ,M , and set the adjusted data as {(xj , j)}Mj=1. Although Theorem 1 describes the gradient descent training with the linearly adjusted data, this result can also approximately describe training with the original training data. Further details are provided in Appendix L. We illustrate Theorem 1 numerically in Figure 1 and more extensively in Appendix A. 
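The practical recipe from the interpretation above is easy to carry out. The sketch below is our own code on synthetic data (we assume SciPy is available): it removes the least-squares linear fit from the targets and interpolates the residuals with a natural cubic spline, which by Theorem 9 is the predicted infinite-width solution when ζ is constant, e.g. under the uniform initialization mentioned above, with ASI so that f(·, θ0) ≡ 0.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic 1D training data (placeholder for a real data set).
x = np.linspace(-1.0, 1.0, 8)
y = np.sin(3.0 * x) + 0.2 * x

# Adjust the data by its least-squares linear fit y ~ a*x + b, as described above.
a, b = np.polyfit(x, y, deg=1)
residuals = y - (a * x + b)

# For a constant curvature penalty, the variational problem (4) is solved by the
# natural cubic spline through the adjusted data; adding the linear part back gives
# the function a trained wide network is predicted to represent.
spline = CubicSpline(x, residuals, bc_type='natural')
g_star = lambda t: spline(t) + a * t + b
print(np.max(np.abs(g_star(x) - y)))   # ~0: the training data is interpolated
```

Comparing such a g_star against a trained wide network is exactly the kind of check reported in Figure 1.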
In close agreement with the theory, the solution to the variational problem captures the solution of gradient descent training uniformly with error of order n−1/2. To illustrate the effect of the curvature penalty function, Figure 1 also shows the solutions to the variational problem for different values of ζ corresponding to different initialization distributions. We see that at input points where ζ is small / peaks strongly, the solution function tends to have a lower curvature / be able to use a higher curvature in order to fit the data. With the presented bias description we can formulate heuristics for parameter initialization either to ease optimization or also to induce specific smoothness priors on the solutions. In particular, by Proposition 8 any curvature penalty 1/ζ can be implemented by an appropriate choice of the parameter initialization distribution. By our analysis, the effective capacity of the model, understood as the set of possible output functions after training, is adapted to the sizeM of the training dataset and is well captured by a space of cubic splines relative to the initial function. This is a space with dimension of orderM independently of the number of parameters of the network. Strategy of the proof In Section 4, we observe that for a linearized model gradient descent with sufficiently small step size finds the minimizer of the training objective which is closest to the initial parameter (similar to a result by Zhang et al., 2019). Then Theorem 4 shows that the training dynamics of the linearization of a wide network is well approximated in parameter and function space by that of a lower dimensional linear model which trains only the output weights. This property is sometimes taken for granted and we show that it holds for the standard parametrization, although it does not hold for the NTK parametrization (defined in Appendix C.3), which leads to the adaptive regime. In Section 5, for networks with a single input and a single layer of ReLUs, we relate the implicit bias of gradient descent in parameter space to an alternative optimization problem. In Theorem 5 we show that the solution of this problem has a well defined limit as the width of the network tends to infinity, which allows us to obtain a variational formulation. In Theorem 6 we translate the description of the bias from parameter space to function space. In Theorem 9 we provide explicit descriptions of the weight function for various common initialization procedures. Finally, we can utilize recent results bounding the difference in function space of the solutions obtained from training a wide network and its linearization (Lee et al., 2019, Theorem H.1). Generalizations Theorem 4 has several generalizations elaborated in Appendix P. For multivariate regression, we have the following theorem. Theorem 2 (Multivariate regression). Use the same network setting as in Theorem 1 except that the number of input units changes to d. Assume that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W ,B) where W is a d-dimensional random vector and B is a random variable. Then, for any finite data set {(xj ,yj)}Mi=1 and sufficiently large n there exist constant vector u and constant v so that optimization of the mean square error on the adjusted training data {(xj ,yj−〈u,xj〉−v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which f(x,θ∗) attains zero training error. 
Furthermore, letU=‖W‖2, V = W/‖W‖2, C =−B/‖W‖2 and ζ(V ,c) = pV,C(V ,c)E(U2|V = V ,C = c) where pV,C is the joint density of (V ,C). Then we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈Rd (the 2-norm over Rd) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C(Rd) ∫ supp(ζ) ( R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c) )2 ζ(V ,c) dV dc subject to g(xj)=yj , j=1,...,M, R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c)=0, (V ,c) 6∈supp(ζ). (5) Here R is the Radon transform which is defined by R{f}(ω, b) := ∫ 〈ω,x〉=b f(x)ds(x), and the power of the negative Laplacian (−∆)(d+1)/2 is the operator defined in Fourier domain by ̂(−∆)(d+1)/2f(ξ)=‖ξ‖d+1f̂(ξ). For different activation functions, we have the following corollary. Corollary 3 (Different activation functions). Use the same setting as in Theorem 1 except that we use the activation function φ instead of ReLU. Suppose that φ is a Green’s function of a linear operator L, i.e. Lφ= δ, where δ denotes the Dirac delta function. Assume that the activation function φ is homogeneous of degree k, i.e. φ(ax)=akφ(x) for all a>0. Then we can find a function p satisfying Lp≡ 0 and adjust training data {(xj ,yj)}Mj=1 to {(xj ,yj−p(xj)}Mj=1. After that, the statement in Theorem 1 holds with the variational problem (4) changed to min g∈C2(S) ∫ S 1 ζ(x) [L(g(x)−f(x,θ0))]2 dx s.t. g(xj)=yj−p(xj), j=1,...,M, (6) where ζ(x)=pC(x)E(W2k|C=x) and S=supp(ζ)∩[minixi,maxixi]. Moreover, our method allows us to describe the optimization trajectory in function space (see Appendix N). If we substitute constraints g(xj)=yj in (4) by a quadratic term 1λ 1 M ∑M j=1(g(xj)−yj)2 added to the objective, we obtain the variational problem for a so-called spatially adaptive smoothing spline (see Abramovich and Steinberg, 1996; Pintore et al., 2006). This problem can be solved explicitly and can be shown to approximate early stopping. To be more specific, the solution to following optimization problem approximates the output function of the network after gradient descent training for t steps with learning rate η̄/n: min g∈C2(S) M∑ j=1 [g(xj)−yj ]2+ 1 η̄t ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx. (7) Related works Zhang et al. (2019) described the implicit bias of gradient descent in the kernel regime as minimizing a kernel norm from initialization, subject to fitting the training data. Our result can be regarded as making the kernel norm explicit, thus providing an interpretable description of the bias in function space and further illuminating the role of the parameter initialization procedure. We prove the equivalence in Appendix M. Savarese et al. (2019) showed that infinite-width networks with 2-norm weight regularization represent functions with smallest 1-norm of the second derivative, an example of which are linear splines. We discuss this in Appendix C.4. A recent preprint further develops this direction for two-layer networks with certain activation functions that interpolate data while minimizing a weight norm (Parhi and Nowak, 2019). In contrast, our result characterizes the solutions of training from a given initialization without explicit regularization, which turn out to minimize a weighted 2-norm of the second derivative and hence correspond to cubic splines. In finishing this work we became aware of a recent preprint (Heiss et al., 2019) which discusses ridge weight penalty, adaptive splines, and early stopping for one-input ReLU networks training only the output layer. Williams et al. 
(2019) showed a similar result in the kernel regime for shallow ReLU networks where they train only the second layer and from zero initialization. In contrast, we consider the initialization of the second layer and show that the difference from the initial output function is implicitly regularized by gradient descent. We show the result of training both layers and prove that it can be approximated by training only the second layer in Theorem 4. In addition, we give the explicit form of ζ in Theorem 9, while the ζ given by Williams et al. (2019) has a minor error because of a typo in their computation. Most importantly, our statement can be generalized to multivariate regression, different activation functions, training trajectories. 4 WIDE NETWORKS AND PARAMETER SPACE 4.1 IMPLICIT BIAS IN PARAMETER SPACE FOR A LINEARIZED MODEL In this section we describe how training a linearized network or a wide network by gradient descent leads to solutions that are biased, having parameter values close to the values at initialization. First, we consider the following linearized model: f lin(x,ω)=f(x,θ0)+∇θf(x,θ0)(ω−θ0). (8) We write ω for the parameter of the linearized model, in order to distinguish it from the parameter of the nonlinearized model. The empirical loss of the linearized model is defined by Llin(ω)= ∑M j=1`(f lin(xj ,ω),yj). The gradient descent iteration for the linearized model is given by ω0 =θ0, ωt+1 =ωt−η∇θf(X ,θ0)T∇f lin(X ,ωt)L lin. (9) Next, we consider wide neural networks. According to Lee et al. (2019, Theorem H.1), sup t ‖f lin(x,ωt)−f(x,θt)‖2 =O(n− 1 2 ) with arbitrarily high probability. So gradient descent training of a wide network or of the linearized model give similar trajectories and solutions in function space. Both fit the training data perfectly, meaning f lin(X ,ω∞)=f(X ,θ∞)=Y , and are also approximately equal outside the training data. Under the assumption that rank(∇θf(X ,θ0)) =M , the gradient descent iterations (9) converge to the unique global minimum that is closest to initialization (Gunasekar et al., 2018a; Zhang et al., 2019), which is the solution of following constrained optimization problem (further details and remarks are provided in Appendix E): min ω ‖ω−θ0‖2 s.t. f lin(X ,ω)=Y. (10) 4.2 TRAINING ONLY THE OUTPUT LAYER APPROXIMATES TRAINING ALL PARAMETERS From now on we consider networks with a single hidden layer of n ReLUs and a linear output f(x,θ) = ∑n i=1W (2) i [W (1) i x+ b (1) i ]+ + b (2). We show that the functions and parameter vectors obtained by training the linearized model are close to those obtained by training only the output layer. Hence, by the arguments of the previous section, training all parameters of a wide network or training only the output layer gives similar functions. Let θ0 = vec(W (1) ,b (1) ,W (2) ,b (2) ) be the parameter at initialization so that f lin(·,θ0) = f(·,θ0). After training the linearized network let the parameter be ω∞ = vec(Ŵ (1),b̂(1),Ŵ (2),b̂(2)). Using initialization (2), with probability arbitrarily close to 1,W (1) i ,b (1) i =O(1) andW (2) i ,b (2) =O(n− 1 2 ).1 Therefore, writingH for the Heaviside function, we have ∇ W (1) i ,b (1) i f(x,θ0)= [ W (2) i H(W (1) i x+b (1) )·x,W (2)i H(W (1) i x+b (1) i ) ] =O(n− 1 2 ), ∇ W (2) i ,b (2)f(x,θ0)= [ [W (1) i x+b (1) i ]+ ,1 ] =O(1). (11) So when n is large, if we use gradient descent with a constant learning rate for all parameters, then the changes ofW (1), b(1), b(2) are negligible compared with the changes ofW (2). 
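The scaling in eq. (11) is straightforward to check numerically. The following sketch is our own PyTorch code, with Gaussian initialization as in (2) and d = 1; it compares the average absolute gradient entry of the output f with respect to the first-layer and the output-layer parameters as the width grows, the former shrinking roughly like n^{-1/2} while the latter stays of order one.

```python
import torch

def layer_grad_magnitudes(n, x=0.3, seed=0):
    torch.manual_seed(seed)
    # Standard parametrization with Gaussian W, B and d = 1, cf. eq. (2).
    W1 = torch.nn.Parameter(torch.randn(n))
    b1 = torch.nn.Parameter(torch.randn(n))
    W2 = torch.nn.Parameter(torch.randn(n) / n ** 0.5)
    b2 = torch.nn.Parameter(torch.randn(1) / n ** 0.5)
    f = torch.relu(W1 * x + b1) @ W2 + b2[0]   # scalar output f(x, theta_0)
    f.backward()
    # Average per-entry gradient magnitude for each layer.
    return W1.grad.abs().mean().item(), W2.grad.abs().mean().item()

for n in (100, 1000, 10000, 100000):
    g1, g2 = layer_grad_magnitudes(n)
    # first-layer entries shrink roughly like n**-0.5; output-layer entries stay O(1)
    print(f"n={n:6d}  mean|df/dW1|={g1:.5f}  mean|df/dW2|={g2:.5f}")
```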
So approximately we can train just the output weights,W (2)i ,i=1,...,n, and fix all other parameters. This corresponds to a smaller linear model. Let ω̃t = vec(W (1) t ,b (1) t ,W̃ (2) t ,b (2) t ) be the parameter at time t under the update rule whereW (1) ,b (1) , b (2) are kept fixed at their initial values, and W̃ (2) 0 =W (2) , W̃ (2) t+1 =W̃ (2) t −η∇W (2)Llin(ω̃t). (12) Let ω̃∞ = limt→∞ ω̃t. By the above discussion, we expect that f lin(x,ω̃∞) is close to f lin(x,ω∞). In fact, we prove the following for the MSE loss. The proof and further remarks are provided in Appendix F. We relate Theorem 4 to training a wide network in Appendix G. Theorem 4 (Training only output weights vs linearized network). Consider a finite data set {(xi,yi)}Mi=1. Assume that (1) we use the MSE loss `(ŷ,y) = 12‖ŷ−y‖ 2 2; (2) infnλmin(Θ̂n)> 0. Let ωt denote the parameters of the linearized model at time t when we train all parameters using (9), and let ω̃t denote the parameters at time t when we only train weights of the output layer using (12). If we use the same learning rate η in these two training processes and η < 2 nλmax(Θ̂n) , then for any x∈R, with probability arbitrarily close to 1 over the random initialization (2), sup t |f lin(x,ω̃t)−f lin(x,ωt)|=O(n−1), as n→∞. (13) Moreover, in terms of the parameter trajectories we have supt ‖W (1) t − Ŵ (1) t ‖2 = O(n−1), supt‖b (1) t −b̂ (1) t ‖2 =O(n−1), supt‖W̃ (2) t −Ŵ (2) t ‖2 =O(n−3/2), supt‖b (2) t −b̂ (2) t ‖=O(n−1). In view of the arguments in this section, in the next sections we will focus on training only the output weights and understanding the corresponding solution functions. 1More precisely, for any δ>0, ∃C, s.t. with prob. 1−δ, |W (2)i |,|b (2)|≤Cn−1/2 and |W (1)i |,|b (1) i |≤C. 5 GRADIENT DESCENT LEADS TO SIMPLE FUNCTIONS In this section we provide a function space characterization of the implicit bias previously described in parameter space. According to (10), gradient descent training of the output weights (12) achieves zero loss, f lin(xj ,ω̃∞)−f lin(xj ,θ0)= ∑n i=1(W̃ (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M , with minimum ‖W̃ (2)−W (2)‖22. Hence gradient descent is actually solving min W (2) ‖W (2)−W (2)‖22 s.t. n∑ i=1 (W (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M. (14) To simplify the presentation, in the following we let f lin(x,θ0) ≡ 0 by using the ASI trick (see Appendix C.2). The analysis still goes through without this. 5.1 INFINITE WIDTH LIMIT We reformulate problem (14) in a way that allows us to consider the limit of infinitely wide networks, with n → ∞, and obtain a deterministic counterpart, analogous to the convergence of the NTK. Let µn denote the empirical distribution of the samples (W (1) i , bi) n i=1, so that µn(A) = 1 n ∑n i=11A ( (W (1) i ,bi) ) . Here 1A is the indicator function for measurable subsets A in R2. We further consider a function αn : R2→R whose value encodes the difference of the output weight from its initialization for a hidden unit with input weight and bias given by the argument, αn(W (1) i ,bi)=n(W (2) i −W (2) i ). Then (14) with ASI can be rewritten as min αn∈C(R2) ∫ R2 α2n(W (1),b) dµn(W (1),b) s.t. ∫ R2 αn(W (1),b)[W (1)xj+b]+ dµn(W (1),b)=yj , (15) where j ranges from 1 toM . Here we minimize over functions αn inC(R2), but since only the values on (W (1)i ,bi) n i=1 are taken into account, we can take any continuous interpolation of αn(W (1) i ,bi), i=1,...,n. Now we can consider the infinite width limit. Let µ be the probability measure of (W,B). 
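Before passing to that limit, the finite-width problem (14) can also be solved directly: with ASI it is a minimum-norm solution of an underdetermined linear system in the output weights, so, assuming the hidden-feature matrix has full row rank, the Moore–Penrose pseudoinverse gives the function that gradient descent on the output layer converges to. The sketch below is our own NumPy code on synthetic data with d = 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 5000, 10
x = np.linspace(-1.0, 1.0, M)
y = np.sin(3.0 * x)                          # synthetic targets

# Frozen first layer, Gaussian initialization with d = 1.
W1 = rng.standard_normal(n)
b1 = rng.standard_normal(n)

# Feature matrix Phi[j, i] = [W1_i * x_j + b1_i]_+ of the fixed hidden layer.
Phi = np.maximum(np.outer(x, W1) + b1, 0.0)  # shape (M, n)

# With ASI the initial output function is zero, and problem (14) asks for the
# smallest change of the output weights that fits the data; assuming Phi has full
# row rank, the minimum-2-norm solution is given by the pseudoinverse.
delta_W2 = np.linalg.pinv(Phi) @ y           # shape (n,)

def g(t):
    # Trained network output (change from initialization) at new inputs t.
    return np.maximum(np.outer(np.atleast_1d(t), W1) + b1, 0.0) @ delta_W2

print(np.max(np.abs(g(x) - y)))              # ~0: training data is fitted exactly
```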
We obtain a continuous version of problem (15) by substituting µ for µn. Since we know that µn weakly converges to µ, we prove that in fact the solution of problem (15) converges to the solution of the continuous problem, which is formulated in the following theorem. Details in Appendix H. Theorem 5. Let (W (1)i ,bi)ni=1 be i.i.d. samples from a pair (W,B) of random variables with finite fourth moment. Suppose µn is the empirical distribution of (W (1) i ,bi) n i=1 and αn(W (1),b) is the solution of (15). Let α(W (1),b) be the solution of the continuous problem with µ in place of µn. Then for any bounded [−L,L], supx∈[−L,L]|gn(x,αn)−g(x,α)|=O(n−1/2) with high probability, where gn(x,αn)= ∫ R2αn(W (1),b)[W (1)x+b]+ dµn(W (1),b) is the function represented by a network with n hidden neurons after training, and g(x,α)= ∫ R2α(W (1),b)[W (1)x+b]+ dµ(W (1),b) is the function represented by the infinite-width network. 5.2 FUNCTION SPACE DESCRIPTION OF THE IMPLICIT BIAS Next we connect the problem from the previous section to second derivatives by first rewriting it in terms of breakpoints. Consider the breakpoint c=−b/W (1) of a ReLU with weightW (1) and bias b. We define a corresponding random variable C=−B/W and let ν denote the distribution of (W,C).2 Then with γ(W (1),c)=α(W (1),−cW (1)) the continuous version of (15) is equivalently given as min γ∈C(R2) ∫ R2 γ2(W (1),c) dν(W (1),c) s.t. ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , (16) where j ranges from 1 toM . Let νC denote the distribution of C=−B/W , and νW|C=c the conditional distribution of W given C = c. Suppose νC has support supp(νC) and a density function pC(c). 2Here we assume that P(W=0)=0 so that the random variable C is well defined. It is not an important restriction, since neurons with weightW (1)=0 give constant functions that can be absorbed in the bias of output layer. Let g(x,γ) = ∫ R2 γ(W (1), c)[W (1)(x− c)]+ dν(W (1), c), which again corresponds to the output function of the network. Then, the second derivative g′′ with respect to x (see Appendix I) satisfies g′′(x,γ)=pC(x) ∫ Rγ(W (1),x) ∣∣W (1)∣∣ dνW|C=x(W (1)). Thus γ(W (1),c) is closely related to g′′(x,γ) and we can try to express (16) in terms of g′′(x,γ). Since g′′(x,γ) determines g(x,γ) only up to linear functions, we consider the following problem: min γ∈C(R2),u∈R,v∈R ∫ R2 γ2(W (1),c) dν(W (1),c) subject to uxj+v+ ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , j=1,...,M. (17) Here u,v are not included in the cost. They add a linear function to the output of the neural network. If u and v in the solution of (17) are small, then the solution is close to the solution of (16). Ongie et al. (2020) also use this trick to simplify the characterization of neural networks in function space. Next we study the solution of (17) in function space. This is our main technical result. Theorem 6 (Implicit bias in function space). Assume W and B are random variables with P(W = 0) = 0, and let C =−B/W . Let ν denote the probability distribution of (W,C). Suppose (γ,u,v) is the solution of (17), and consider the corresponding output function g(x,(γ,u,v))=ux+v+ ∫ R2 γ(W (1),c)[W (1)(x−c)]+ dν(W (1),c). (18) Let νC denote the marginal distribution of C and assume it has a density function pC . Let E(W2|C) denote the conditional expectation ofW2 given C. Consider the function ζ(x)=pC(x)E(W2|C=x). Assume that training data xi∈supp(ζ), i=1,...,m. Consider the set S=supp(ζ)∩[minixi,maxixi]. 
Then g(x,(γ,u,v)) satisfies g′′(x,(γ,u,v))=0 for x 6∈S and for x∈S it is the solution of the following problem: min h∈C2(S) ∫ S (h′′(x))2 ζ(x) dx s.t. h(xj)=yj , j=1,...,m. (19) The proof is provided in Appendix I, where we also present the corresponding statement without ASI. We study the explicit form of this function in the next section. 5.3 EXPLICIT FORM OF THE CURVATURE PENALTY FUNCTION Proposition 7. Let pW,B denote the joint density function of (W,B) and let C =−B/W so that pC is the breakpoint density. Then ζ(x)=E(W 2|C=x)pC(x)= ∫ R|W | 3pW,B(W,−Wx) dW . The proof is presented in Appendix J. If we allow the initial weight and biases to be sampled from a suitable joint distribution, we can make the curvature penalty ρ=1/ζ arbitrary. Proposition 8 (Constructing any curvature penalty). Given any function % : R→ R>0, satisfying Z = ∫ R 1 % <∞, if we set the density of C as pC(x) = 1 Z 1 %(x) and make W independent of C with non-vanishing second moment, then (E(W 2|C=x)pC(x))−1 =(E(W 2)pC(x))−1∝%(x), x∈R. Further remarks on sampling and independent variables are provided in Appendix J. To conclude this section we compute the explicit form of ζ for several common initialization procedures. Theorem 9 (Explicit form of the curvature penalty for common initializations). (a) Gaussian initialization. Assume thatW and B are independent,W∼N (0,σ2w) and B∼N (0,σ2b ). Then ζ is given by ζ(x)= 2σ 3 wσ 3 b π(σ2b+x 2σ2w) 2 . (b) Binary-uniform initialization. Assume that W and B are independent, W ∈ {−1, 1} and B∼U(−ab,ab) with ab≥L. Then ζ is constant on [−L,L]. (c) Uniform initialization. Assume that W and B are independent, W ∼ U(−aw, aw) and B∼U(−ab,ab) with abaw ≥L. Then ζ is constant on [−L,L]. The proof is provided in Appendix K. Theorem 9 (b) and (c) show that for certain distributions of (W,B), ζ is constant. In this case problem (19) is solved by the cubic spline interpolation of the data with natural boundary conditions (Ahlberg et al., 1967). The case of general ζ is solved by space adaptive natural cubic splines, which can be computed numerically by solving a linear system and theoretically in an RKHS formalism. We provide details in Appendix O. 6 CONCLUSION AND DISCUSSION We obtained a explicit description of the implicit bias of gradient descent for mean squared error regression with wide shallow ReLU networks. We presented a result for the univariate case and generalizations to multi-variate ReLU networks and networks with different activation functions. Our result can also help us characterize the training trajectory of gradient descent in function space. Our main result shows that the trained network outputs a function that interpolates the training data and has the minimum possible weighted 2-norm of the second derivative with respect to the input. This corresponds to an spatially adaptive interpolating spline. The space of interpolating splines is a linear space which has a dimension that is linear in the number of data points. Hence our result means that, even if the network has many parameters, the complexity of the trained functions will be adjusted to the number of data points. Interpolating splines have been studied in great detail in the literature and our result allows us to directly apply corresponding generalization results to the case of trained networks. 
This is related to approximation theory and characterizations for the number of samples and their spacing needed in order to approximate functions from a given smoothness class to a desired precision (Rieger and Zwicknagl, 2010; Wendland, 2004). Zhang et al. (2019) described the implicit bias of gradient descent as minimizing a RKHS norm from initialization. Our result can be regarded as making the RKHS norm explicit, thus providing an interpretable description of the bias in function space. Compared with Zhang et al. (2019), our results give a precise description of the role of the parameter initialization scheme, which determines the inverse curvature penalty function ζ . This gives us a rather good picture of how the initialization affects the implicit bias of gradient descent. This could be used in order to select a good initialization scheme. For instance, one could conduct a pre-assessment of the data to estimate the locations of the input space where the target function has a high curvature, and choose the parameter initialization accordingly. This is an interesting possibility to experiment with, based on our theoretical result. Our result can also be interpreted in combination with early stopping. The training trajectory is approximated by a smoothing spline, meaning that the network will filter out high frequencies which are usually associated to noise in the training data. This behaviour is sometimes referred to as a spectral bias (Rahaman et al., 2019).
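As a small numerical sanity check of Proposition 7 and Theorem 9 (a), the closed-form curvature weight for Gaussian initialization can be compared against a direct quadrature of the integral ζ(x) = ∫ |W|³ p_{W,B}(W, −Wx) dW. The sketch below is our own code; the integration grid and the values of σ_w, σ_b are arbitrary choices.

```python
import numpy as np

def zeta_closed_form(x, sw, sb):
    # Theorem 9 (a): zeta(x) = 2 sw^3 sb^3 / (pi * (sb^2 + x^2 sw^2)^2).
    return 2.0 * sw**3 * sb**3 / (np.pi * (sb**2 + x**2 * sw**2) ** 2)

def zeta_quadrature(x, sw, sb, half_width=50.0, num=200001):
    # Proposition 7 for independent W ~ N(0, sw^2), B ~ N(0, sb^2):
    # zeta(x) = int |w|^3 p_W(w) p_B(-w x) dw, evaluated by a simple Riemann sum.
    w = np.linspace(-half_width, half_width, num)
    p_w = np.exp(-w**2 / (2 * sw**2)) / (np.sqrt(2 * np.pi) * sw)
    p_b = np.exp(-(w * x) ** 2 / (2 * sb**2)) / (np.sqrt(2 * np.pi) * sb)
    return np.sum(np.abs(w) ** 3 * p_w * p_b) * (w[1] - w[0])

for x in (0.0, 0.5, 1.0, 2.0):
    print(x, zeta_closed_form(x, 1.0, 1.0), zeta_quadrature(x, 1.0, 1.0))
```

The two columns agree to several decimal places, and the same quadrature can be used to tabulate ζ for any joint initialization density p_{W,B}.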
1. What is the main contribution of the paper regarding neural networks and their regularization?
2. How does the paper explore the effective regularization associated with performing gradient descent on an unregularized squared error loss?
3. What is the relationship between the resulting regularizer and the weighted 2-norm of the second derivative?
4. How does the paper address an important open problem and provide interesting insights into the role of initialization and the implicit bias associated with training?
5. What is the issue with the optimization occurring in the "kernel regime"?
6. How does the paper compare with the work of Savarese et al., particularly regarding explicit 2-norm weight decay regularization?
7. Why does the reviewer expect the Savarese result to extend to unregularized settings with gradient descent?
8. What is the potential discrepancy between the function space regularization calculated in Savarese and the paper under review?
9. How can the discussion in Section C.4 be improved to address this issue?
Review
This paper presents a function space view of 2-layer ReLU neural networks and the implicit regularization associated with full-batch gradient descent for various initializations of the weights. In past work, it was shown that for very wide neural networks, the global minimizer of a loss plus weight decay regularizer amounts to regularizing the total variation of the second derivative of a function (and related results in higher dimensions). This paper explores the effective regularization associated with performing gradient descent on an unregularized squared error loss. The resulting regularizer is akin to a weighted 2-norm of the second derivative, where the weighting function depends on the distribution of the initial weights. This result addresses an important open problem, provides interesting insights into the role of initialization and the implicit bias associated with training, and is supported by nice illustrations. The literature review is strong and covers much of the relevant literature.

Much of the analysis seems to depend on the optimization occurring in the "kernel regime". It is unclear when this is or is not a reasonable model. This issue is particularly salient in light of the comparison of the results with the work of Savarese et al. Savarese considers explicit 2-norm weight decay regularization. Although the paper under review considers unregularized losses, there are multiple studies showing that gradient descent initialized near zero induces 2-norm regularization. So a natural thought is that the Savarese result would also extend (with some non-trivial technical work) to unregularized settings with gradient descent. With this in mind, I expected to see the Savarese norm as a special case of the results of the paper under review, but this is not the case. In particular, the function space regularization calculated in Savarese is NOT an RKHS norm, while the paper under review claims the function space regularization they find IS a kernel norm. I would like to see a more detailed discussion of this potential discrepancy. Section C.4 does not make this clear.
ICLR
Title Implicit bias of gradient descent for mean squared error regression with wide neural networks Abstract We investigate gradient descent training of wide neural networks and the corresponding implicit bias in function space. For 1D regression, we show that the solution of training a width-n shallow ReLU network is within n−1/2 of the function which fits the training data and whose difference from initialization has smallest 2-norm of the weighted second derivative with respect to the input. The curvature penalty function 1/ζ is expressed in terms of the probability distribution that is utilized to initialize the network parameters, and we compute it explicitly for various common initialization procedures. For instance, asymmetric initialization with a uniform distribution yields a constant curvature penalty, and thence the solution function is the natural cubic spline interpolation of the training data. While similar results have been obtained in previous works, our analysis clarifies important details and allows us to obtain significant generalizations. In particular, the result generalizes to multivariate regression and different activation functions. Moreover, we show that the training trajectories are captured by trajectories of spatially adaptive smoothing splines with decreasing regularization strength. 1 INTRODUCTION Understanding why neural networks trained in the overparametrized regime and without explicit regularization generalize well in practice is an important problem (Zhang et al., 2017). Some form of capacity control different from network size must be at play (Neyshabur et al., 2014) and specifically the implicit bias of parameter optimization has been identified to play a key role (Neyshabur et al., 2017). By implicit bias we mean that among the many hypotheses that fit the training data, the algorithm selects one which satisfies additional properties that may be beneficial for its performance on new data. Jacot et al. (2018) and Lee et al. (2019) showed that the training dynamics of shallow and deep wide neural networks is well approximated by that of the linear Taylor approximation of the models at a suitable initialization. Chizat et al. (2019) observe that a model can converge to zero training loss while hardly varying its parameters, a phenomenon that can be attributed to scaling of the output weights and makes the model behave as its linearization around the initialization. Zhang et al. (2019) consider linearized models for regression problems and show that gradient flow finds the global minimum of the loss function which is closest to initialization in parameter space. This type of analysis connects with trajectory based analysis of neural networks (Saxe et al., 2014). Oymak and Soltanolkotabi (2019) studied the overparametrized neural networks directly and showed that gradient descent finds a global minimizer of the loss function which is close to the initialization. Towards interpreting parameters in function space, Savarese et al. (2019) and Ongie et al. (2020) studied infinite-width neural networks with parameters having bounded norm, in 1D and multi-dimensional input spaces, respectively. They showed that, under a standard parametrization, the complexity of the functions represented by the network, as measured by the 1-norm of the second derivative, can be controlled by the 2-norm of the parameters. Using these results, one can show that gradient descent with `2 weight penalty leads to simple functions. Sahs et al. 
(2020) relates function properties, such as breakpoint and slope distributions, to the distributions of the network parameters. The implicit bias of parameter optimization has been investigated in terms of the properties of the loss function at the points reached by different optimization methodologies (Keskar et al., 2017; Wu et al., 2017; Dinh et al., 2017). In terms of the solutions, Maennel et al. (2018) show that gradient flow for shallow networks with rectified linear units (ReLU) initialized close to zero quantizes features in a way that depends on the training data but not on the network size. Williams et al. (2019) obtained results for 1D regression contrasting the kernel and adaptive regimes. Soudry et al. (2018) show that in classification problems with separable data, gradient descent with linear networks converges to a maxmargin solution. Gunasekar et al. (2018b) present a result on implicit bias for deep linear convolutional networks, and Ji and Telgarsky (2019) study non-separable data. Chizat and Bach (2020) show that gradient flow for logistic regression with infinitely wide two-layer networks yields a max-margin classifier in a certain space. Gunasekar et al. (2018a) analyze the implicit bias of different optimization methods (natural gradient, steepest and mirror descent) for linear regression and separable linear classification problems, and obtain characterizations in terms of minimum norm or max-margin solutions. In this work, we study the implicit bias of gradient descent for regression problems. We focus on wide ReLU networks and describe the bias in function space. In Section 2 we provide settings and notation. We present our main results in Section 3, and develop the main theory in Sections 4 and 5. In the interest of a concise presentation, technical proofs and extended discussions are deferred to appendices. 2 NOTATION AND PROBLEM SETUP Consider a fully connected network with d inputs, one hidden layer of width n, and a single output. For any given input x∈Rd, the output of the network is f(x,θ)= n∑ i=1 W (2) i φ(〈W (1) i ,x〉+b (1) i )+b (2), (1) where φ is a point-wise activation function,W (1)∈Rn×d,W (2)∈Rn, b(1)∈Rn and b(2)∈R are the weights and biases of layer l=1,2. We write θ=vec(∪2l=1{W (l),b(l)}) for the vector of all network parameters. These parameters are initialized by independent samples of pre-specified random variables W and B in the following way: W (1) i,j d = √ 1/dW, b(1)i d = √ 1/d B W (2) i d = √ 1/nW, b(2) d= √ 1/n B. (2) More generally, we will also allow weight-bias pairs to be sampled from a joint distribution of (W,B) which we only assume to be sub-Gaussian. In the analysis of Jacot et al. (2018); Lee et al. (2019), W and B are Gaussian N (0,σ2). In the default initialization of PyTorch,W and B have uniform distribution U(−σ,σ). The setting (1) is known as the standard parametrization. Some works (Jacot et al., 2018; Lee et al., 2019) utilize the so-called NTK parametrization, where the factor √ 1/n is carried outside of the trainable parameter. If we fix the learning rate for all parameters, gradient descent leads to different trajectories under these two parametrizations. Our results are presented for the standard parametrization. Details on this in Appendix C.3. We consider a regression problem for data {(xj , yj)}Mj=1 with inputs X = {xj}Mj=1 and outputs Y = {yj}Mj=1. For a loss function ` : R × R → R, the empirical risk of our function is L(θ)= ∑M j=1`(f(xj ,θ),yj). 
We use full batch gradient descent with a fixed learning rate η to minimize L(θ). Writing θt for the parameter at time t, and θ0 for the initialization, this defines an iteration θt+1 =θt−η∇L(θ)=θt−η∇θf(X ,θt)T∇f(X ,θt)L, (3) where f(X ,θt)=[f(x1,θt),...,f(xM ,θt)]T is the vector of network outputs for all training inputs, and ∇f(X ,θt)L is the gradient of the loss with respect to the model outputs. We will use subscript i to index neurons and subscript t to index time. Let Θ̂n be the empirical neural tangent kernel (NTK) of the standard parametrization at time 0, which is the matrix Θ̂n= 1n∇θf(X ,θ0)∇θf(X ,θ0) T . 3 MAIN RESULTS AND DISCUSSION We obtain a description of the implicit bias in function space when applying gradient descent to regression problems with wide ReLU neural networks. We prove the following result in Appendix D. An interpretation of the result and generalizations are given further below. Theorem 1 (Implicit bias of gradient descent in wide ReLU networks). Consider a feedforward network with a single input unit, a hidden layer of n rectified linear units, and a single linear output unit. Assume standard parametrization (1) and that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W,B) (2) with joint density pW,B. Then, for any finite data set {(xj ,yj)}Mj=1 and sufficiently large n there exist constant u and v so that optimization of the mean square error on the adjusted training data {(xj , yj − uxj − v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which the output function f(x,θ∗) (1) attains zero training error. Furthermore, letting ζ(x)= ∫ R|W | 3pW,B(W,−Wx) dW and S=supp(ζ)∩[minixj ,maxixj ], we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈S (the 2-norm over S) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C2(S) ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx subject to g(xj)=yj−uxj−v, j=1,...,M. (4) Interpretation An intuitive interpretation of the theorem is that at those regions of the input space where ζ is smaller, we can expect the difference between the functions after and before training to have a small curvature. We may call ρ=1/ζ a curvature penalty function. The bias induced from initialization is expressed explicitly. We note that under suitable asymmetric parameter initialization (see Appendix C.2), it is possible to achieve f(·,θ0)≡0. Then the regularization is on the curvature of the output function itself. In Theorem 9 we obtain the explicit form of ζ for various common parameter initialization procedures. In particular, when the parameters are initialized independently from a uniform distribution on a finite interval, ζ is constant and the problem is solved by the natural cubic spline interpolation of the data. The adjustment of the training data simply accounts for the fact that second derivatives define a function only up to linear terms. In practice we can use the coefficients a and b of linear regression yj = axj + b+ j , j = 1, ... ,M , and set the adjusted data as {(xj , j)}Mj=1. Although Theorem 1 describes the gradient descent training with the linearly adjusted data, this result can also approximately describe training with the original training data. Further details are provided in Appendix L. We illustrate Theorem 1 numerically in Figure 1 and more extensively in Appendix A. 
In close agreement with the theory, the solution to the variational problem captures the solution of gradient descent training uniformly with error of order n−1/2. To illustrate the effect of the curvature penalty function, Figure 1 also shows the solutions to the variational problem for different values of ζ corresponding to different initialization distributions. We see that at input points where ζ is small / peaks strongly, the solution function tends to have a lower curvature / be able to use a higher curvature in order to fit the data. With the presented bias description we can formulate heuristics for parameter initialization either to ease optimization or also to induce specific smoothness priors on the solutions. In particular, by Proposition 8 any curvature penalty 1/ζ can be implemented by an appropriate choice of the parameter initialization distribution. By our analysis, the effective capacity of the model, understood as the set of possible output functions after training, is adapted to the sizeM of the training dataset and is well captured by a space of cubic splines relative to the initial function. This is a space with dimension of orderM independently of the number of parameters of the network. Strategy of the proof In Section 4, we observe that for a linearized model gradient descent with sufficiently small step size finds the minimizer of the training objective which is closest to the initial parameter (similar to a result by Zhang et al., 2019). Then Theorem 4 shows that the training dynamics of the linearization of a wide network is well approximated in parameter and function space by that of a lower dimensional linear model which trains only the output weights. This property is sometimes taken for granted and we show that it holds for the standard parametrization, although it does not hold for the NTK parametrization (defined in Appendix C.3), which leads to the adaptive regime. In Section 5, for networks with a single input and a single layer of ReLUs, we relate the implicit bias of gradient descent in parameter space to an alternative optimization problem. In Theorem 5 we show that the solution of this problem has a well defined limit as the width of the network tends to infinity, which allows us to obtain a variational formulation. In Theorem 6 we translate the description of the bias from parameter space to function space. In Theorem 9 we provide explicit descriptions of the weight function for various common initialization procedures. Finally, we can utilize recent results bounding the difference in function space of the solutions obtained from training a wide network and its linearization (Lee et al., 2019, Theorem H.1). Generalizations Theorem 4 has several generalizations elaborated in Appendix P. For multivariate regression, we have the following theorem. Theorem 2 (Multivariate regression). Use the same network setting as in Theorem 1 except that the number of input units changes to d. Assume that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W ,B) where W is a d-dimensional random vector and B is a random variable. Then, for any finite data set {(xj ,yj)}Mi=1 and sufficiently large n there exist constant vector u and constant v so that optimization of the mean square error on the adjusted training data {(xj ,yj−〈u,xj〉−v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which f(x,θ∗) attains zero training error. 
Furthermore, letU=‖W‖2, V = W/‖W‖2, C =−B/‖W‖2 and ζ(V ,c) = pV,C(V ,c)E(U2|V = V ,C = c) where pV,C is the joint density of (V ,C). Then we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈Rd (the 2-norm over Rd) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C(Rd) ∫ supp(ζ) ( R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c) )2 ζ(V ,c) dV dc subject to g(xj)=yj , j=1,...,M, R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c)=0, (V ,c) 6∈supp(ζ). (5) Here R is the Radon transform which is defined by R{f}(ω, b) := ∫ 〈ω,x〉=b f(x)ds(x), and the power of the negative Laplacian (−∆)(d+1)/2 is the operator defined in Fourier domain by ̂(−∆)(d+1)/2f(ξ)=‖ξ‖d+1f̂(ξ). For different activation functions, we have the following corollary. Corollary 3 (Different activation functions). Use the same setting as in Theorem 1 except that we use the activation function φ instead of ReLU. Suppose that φ is a Green’s function of a linear operator L, i.e. Lφ= δ, where δ denotes the Dirac delta function. Assume that the activation function φ is homogeneous of degree k, i.e. φ(ax)=akφ(x) for all a>0. Then we can find a function p satisfying Lp≡ 0 and adjust training data {(xj ,yj)}Mj=1 to {(xj ,yj−p(xj)}Mj=1. After that, the statement in Theorem 1 holds with the variational problem (4) changed to min g∈C2(S) ∫ S 1 ζ(x) [L(g(x)−f(x,θ0))]2 dx s.t. g(xj)=yj−p(xj), j=1,...,M, (6) where ζ(x)=pC(x)E(W2k|C=x) and S=supp(ζ)∩[minixi,maxixi]. Moreover, our method allows us to describe the optimization trajectory in function space (see Appendix N). If we substitute constraints g(xj)=yj in (4) by a quadratic term 1λ 1 M ∑M j=1(g(xj)−yj)2 added to the objective, we obtain the variational problem for a so-called spatially adaptive smoothing spline (see Abramovich and Steinberg, 1996; Pintore et al., 2006). This problem can be solved explicitly and can be shown to approximate early stopping. To be more specific, the solution to following optimization problem approximates the output function of the network after gradient descent training for t steps with learning rate η̄/n: min g∈C2(S) M∑ j=1 [g(xj)−yj ]2+ 1 η̄t ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx. (7) Related works Zhang et al. (2019) described the implicit bias of gradient descent in the kernel regime as minimizing a kernel norm from initialization, subject to fitting the training data. Our result can be regarded as making the kernel norm explicit, thus providing an interpretable description of the bias in function space and further illuminating the role of the parameter initialization procedure. We prove the equivalence in Appendix M. Savarese et al. (2019) showed that infinite-width networks with 2-norm weight regularization represent functions with smallest 1-norm of the second derivative, an example of which are linear splines. We discuss this in Appendix C.4. A recent preprint further develops this direction for two-layer networks with certain activation functions that interpolate data while minimizing a weight norm (Parhi and Nowak, 2019). In contrast, our result characterizes the solutions of training from a given initialization without explicit regularization, which turn out to minimize a weighted 2-norm of the second derivative and hence correspond to cubic splines. In finishing this work we became aware of a recent preprint (Heiss et al., 2019) which discusses ridge weight penalty, adaptive splines, and early stopping for one-input ReLU networks training only the output layer. Williams et al. 
(2019) showed a similar result in the kernel regime for shallow ReLU networks where they train only the second layer and from zero initialization. In contrast, we consider the initialization of the second layer and show that the difference from the initial output function is implicitly regularized by gradient descent. We show the result of training both layers and prove that it can be approximated by training only the second layer in Theorem 4. In addition, we give the explicit form of ζ in Theorem 9, while the ζ given by Williams et al. (2019) has a minor error because of a typo in their computation. Most importantly, our statement can be generalized to multivariate regression, different activation functions, training trajectories. 4 WIDE NETWORKS AND PARAMETER SPACE 4.1 IMPLICIT BIAS IN PARAMETER SPACE FOR A LINEARIZED MODEL In this section we describe how training a linearized network or a wide network by gradient descent leads to solutions that are biased, having parameter values close to the values at initialization. First, we consider the following linearized model: f lin(x,ω)=f(x,θ0)+∇θf(x,θ0)(ω−θ0). (8) We write ω for the parameter of the linearized model, in order to distinguish it from the parameter of the nonlinearized model. The empirical loss of the linearized model is defined by Llin(ω)= ∑M j=1`(f lin(xj ,ω),yj). The gradient descent iteration for the linearized model is given by ω0 =θ0, ωt+1 =ωt−η∇θf(X ,θ0)T∇f lin(X ,ωt)L lin. (9) Next, we consider wide neural networks. According to Lee et al. (2019, Theorem H.1), sup t ‖f lin(x,ωt)−f(x,θt)‖2 =O(n− 1 2 ) with arbitrarily high probability. So gradient descent training of a wide network or of the linearized model give similar trajectories and solutions in function space. Both fit the training data perfectly, meaning f lin(X ,ω∞)=f(X ,θ∞)=Y , and are also approximately equal outside the training data. Under the assumption that rank(∇θf(X ,θ0)) =M , the gradient descent iterations (9) converge to the unique global minimum that is closest to initialization (Gunasekar et al., 2018a; Zhang et al., 2019), which is the solution of following constrained optimization problem (further details and remarks are provided in Appendix E): min ω ‖ω−θ0‖2 s.t. f lin(X ,ω)=Y. (10) 4.2 TRAINING ONLY THE OUTPUT LAYER APPROXIMATES TRAINING ALL PARAMETERS From now on we consider networks with a single hidden layer of n ReLUs and a linear output f(x,θ) = ∑n i=1W (2) i [W (1) i x+ b (1) i ]+ + b (2). We show that the functions and parameter vectors obtained by training the linearized model are close to those obtained by training only the output layer. Hence, by the arguments of the previous section, training all parameters of a wide network or training only the output layer gives similar functions. Let θ0 = vec(W (1) ,b (1) ,W (2) ,b (2) ) be the parameter at initialization so that f lin(·,θ0) = f(·,θ0). After training the linearized network let the parameter be ω∞ = vec(Ŵ (1),b̂(1),Ŵ (2),b̂(2)). Using initialization (2), with probability arbitrarily close to 1,W (1) i ,b (1) i =O(1) andW (2) i ,b (2) =O(n− 1 2 ).1 Therefore, writingH for the Heaviside function, we have ∇ W (1) i ,b (1) i f(x,θ0)= [ W (2) i H(W (1) i x+b (1) )·x,W (2)i H(W (1) i x+b (1) i ) ] =O(n− 1 2 ), ∇ W (2) i ,b (2)f(x,θ0)= [ [W (1) i x+b (1) i ]+ ,1 ] =O(1). (11) So when n is large, if we use gradient descent with a constant learning rate for all parameters, then the changes ofW (1), b(1), b(2) are negligible compared with the changes ofW (2). 
So approximately we can train just the output weights,W (2)i ,i=1,...,n, and fix all other parameters. This corresponds to a smaller linear model. Let ω̃t = vec(W (1) t ,b (1) t ,W̃ (2) t ,b (2) t ) be the parameter at time t under the update rule whereW (1) ,b (1) , b (2) are kept fixed at their initial values, and W̃ (2) 0 =W (2) , W̃ (2) t+1 =W̃ (2) t −η∇W (2)Llin(ω̃t). (12) Let ω̃∞ = limt→∞ ω̃t. By the above discussion, we expect that f lin(x,ω̃∞) is close to f lin(x,ω∞). In fact, we prove the following for the MSE loss. The proof and further remarks are provided in Appendix F. We relate Theorem 4 to training a wide network in Appendix G. Theorem 4 (Training only output weights vs linearized network). Consider a finite data set {(xi,yi)}Mi=1. Assume that (1) we use the MSE loss `(ŷ,y) = 12‖ŷ−y‖ 2 2; (2) infnλmin(Θ̂n)> 0. Let ωt denote the parameters of the linearized model at time t when we train all parameters using (9), and let ω̃t denote the parameters at time t when we only train weights of the output layer using (12). If we use the same learning rate η in these two training processes and η < 2 nλmax(Θ̂n) , then for any x∈R, with probability arbitrarily close to 1 over the random initialization (2), sup t |f lin(x,ω̃t)−f lin(x,ωt)|=O(n−1), as n→∞. (13) Moreover, in terms of the parameter trajectories we have supt ‖W (1) t − Ŵ (1) t ‖2 = O(n−1), supt‖b (1) t −b̂ (1) t ‖2 =O(n−1), supt‖W̃ (2) t −Ŵ (2) t ‖2 =O(n−3/2), supt‖b (2) t −b̂ (2) t ‖=O(n−1). In view of the arguments in this section, in the next sections we will focus on training only the output weights and understanding the corresponding solution functions. 1More precisely, for any δ>0, ∃C, s.t. with prob. 1−δ, |W (2)i |,|b (2)|≤Cn−1/2 and |W (1)i |,|b (1) i |≤C. 5 GRADIENT DESCENT LEADS TO SIMPLE FUNCTIONS In this section we provide a function space characterization of the implicit bias previously described in parameter space. According to (10), gradient descent training of the output weights (12) achieves zero loss, f lin(xj ,ω̃∞)−f lin(xj ,θ0)= ∑n i=1(W̃ (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M , with minimum ‖W̃ (2)−W (2)‖22. Hence gradient descent is actually solving min W (2) ‖W (2)−W (2)‖22 s.t. n∑ i=1 (W (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M. (14) To simplify the presentation, in the following we let f lin(x,θ0) ≡ 0 by using the ASI trick (see Appendix C.2). The analysis still goes through without this. 5.1 INFINITE WIDTH LIMIT We reformulate problem (14) in a way that allows us to consider the limit of infinitely wide networks, with n → ∞, and obtain a deterministic counterpart, analogous to the convergence of the NTK. Let µn denote the empirical distribution of the samples (W (1) i , bi) n i=1, so that µn(A) = 1 n ∑n i=11A ( (W (1) i ,bi) ) . Here 1A is the indicator function for measurable subsets A in R2. We further consider a function αn : R2→R whose value encodes the difference of the output weight from its initialization for a hidden unit with input weight and bias given by the argument, αn(W (1) i ,bi)=n(W (2) i −W (2) i ). Then (14) with ASI can be rewritten as min αn∈C(R2) ∫ R2 α2n(W (1),b) dµn(W (1),b) s.t. ∫ R2 αn(W (1),b)[W (1)xj+b]+ dµn(W (1),b)=yj , (15) where j ranges from 1 toM . Here we minimize over functions αn inC(R2), but since only the values on (W (1)i ,bi) n i=1 are taken into account, we can take any continuous interpolation of αn(W (1) i ,bi), i=1,...,n. Now we can consider the infinite width limit. Let µ be the probability measure of (W,B). 
We obtain a continuous version of problem (15) by substituting µ for µn. Since we know that µn weakly converges to µ, we prove that in fact the solution of problem (15) converges to the solution of the continuous problem, which is formulated in the following theorem. Details in Appendix H. Theorem 5. Let (W (1)i ,bi)ni=1 be i.i.d. samples from a pair (W,B) of random variables with finite fourth moment. Suppose µn is the empirical distribution of (W (1) i ,bi) n i=1 and αn(W (1),b) is the solution of (15). Let α(W (1),b) be the solution of the continuous problem with µ in place of µn. Then for any bounded [−L,L], supx∈[−L,L]|gn(x,αn)−g(x,α)|=O(n−1/2) with high probability, where gn(x,αn)= ∫ R2αn(W (1),b)[W (1)x+b]+ dµn(W (1),b) is the function represented by a network with n hidden neurons after training, and g(x,α)= ∫ R2α(W (1),b)[W (1)x+b]+ dµ(W (1),b) is the function represented by the infinite-width network. 5.2 FUNCTION SPACE DESCRIPTION OF THE IMPLICIT BIAS Next we connect the problem from the previous section to second derivatives by first rewriting it in terms of breakpoints. Consider the breakpoint c=−b/W (1) of a ReLU with weightW (1) and bias b. We define a corresponding random variable C=−B/W and let ν denote the distribution of (W,C).2 Then with γ(W (1),c)=α(W (1),−cW (1)) the continuous version of (15) is equivalently given as min γ∈C(R2) ∫ R2 γ2(W (1),c) dν(W (1),c) s.t. ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , (16) where j ranges from 1 toM . Let νC denote the distribution of C=−B/W , and νW|C=c the conditional distribution of W given C = c. Suppose νC has support supp(νC) and a density function pC(c). 2Here we assume that P(W=0)=0 so that the random variable C is well defined. It is not an important restriction, since neurons with weightW (1)=0 give constant functions that can be absorbed in the bias of output layer. Let g(x,γ) = ∫ R2 γ(W (1), c)[W (1)(x− c)]+ dν(W (1), c), which again corresponds to the output function of the network. Then, the second derivative g′′ with respect to x (see Appendix I) satisfies g′′(x,γ)=pC(x) ∫ Rγ(W (1),x) ∣∣W (1)∣∣ dνW|C=x(W (1)). Thus γ(W (1),c) is closely related to g′′(x,γ) and we can try to express (16) in terms of g′′(x,γ). Since g′′(x,γ) determines g(x,γ) only up to linear functions, we consider the following problem: min γ∈C(R2),u∈R,v∈R ∫ R2 γ2(W (1),c) dν(W (1),c) subject to uxj+v+ ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , j=1,...,M. (17) Here u,v are not included in the cost. They add a linear function to the output of the neural network. If u and v in the solution of (17) are small, then the solution is close to the solution of (16). Ongie et al. (2020) also use this trick to simplify the characterization of neural networks in function space. Next we study the solution of (17) in function space. This is our main technical result. Theorem 6 (Implicit bias in function space). Assume W and B are random variables with P(W = 0) = 0, and let C =−B/W . Let ν denote the probability distribution of (W,C). Suppose (γ,u,v) is the solution of (17), and consider the corresponding output function g(x,(γ,u,v))=ux+v+ ∫ R2 γ(W (1),c)[W (1)(x−c)]+ dν(W (1),c). (18) Let νC denote the marginal distribution of C and assume it has a density function pC . Let E(W2|C) denote the conditional expectation ofW2 given C. Consider the function ζ(x)=pC(x)E(W2|C=x). Assume that training data xi∈supp(ζ), i=1,...,m. Consider the set S=supp(ζ)∩[minixi,maxixi]. 
Then g(x,(γ,u,v)) satisfies g′′(x,(γ,u,v))=0 for x 6∈S and for x∈S it is the solution of the following problem: min h∈C2(S) ∫ S (h′′(x))2 ζ(x) dx s.t. h(xj)=yj , j=1,...,m. (19) The proof is provided in Appendix I, where we also present the corresponding statement without ASI. We study the explicit form of this function in the next section. 5.3 EXPLICIT FORM OF THE CURVATURE PENALTY FUNCTION Proposition 7. Let pW,B denote the joint density function of (W,B) and let C =−B/W so that pC is the breakpoint density. Then ζ(x)=E(W 2|C=x)pC(x)= ∫ R|W | 3pW,B(W,−Wx) dW . The proof is presented in Appendix J. If we allow the initial weight and biases to be sampled from a suitable joint distribution, we can make the curvature penalty ρ=1/ζ arbitrary. Proposition 8 (Constructing any curvature penalty). Given any function % : R→ R>0, satisfying Z = ∫ R 1 % <∞, if we set the density of C as pC(x) = 1 Z 1 %(x) and make W independent of C with non-vanishing second moment, then (E(W 2|C=x)pC(x))−1 =(E(W 2)pC(x))−1∝%(x), x∈R. Further remarks on sampling and independent variables are provided in Appendix J. To conclude this section we compute the explicit form of ζ for several common initialization procedures. Theorem 9 (Explicit form of the curvature penalty for common initializations). (a) Gaussian initialization. Assume thatW and B are independent,W∼N (0,σ2w) and B∼N (0,σ2b ). Then ζ is given by ζ(x)= 2σ 3 wσ 3 b π(σ2b+x 2σ2w) 2 . (b) Binary-uniform initialization. Assume that W and B are independent, W ∈ {−1, 1} and B∼U(−ab,ab) with ab≥L. Then ζ is constant on [−L,L]. (c) Uniform initialization. Assume that W and B are independent, W ∼ U(−aw, aw) and B∼U(−ab,ab) with abaw ≥L. Then ζ is constant on [−L,L]. The proof is provided in Appendix K. Theorem 9 (b) and (c) show that for certain distributions of (W,B), ζ is constant. In this case problem (19) is solved by the cubic spline interpolation of the data with natural boundary conditions (Ahlberg et al., 1967). The case of general ζ is solved by space adaptive natural cubic splines, which can be computed numerically by solving a linear system and theoretically in an RKHS formalism. We provide details in Appendix O. 6 CONCLUSION AND DISCUSSION We obtained a explicit description of the implicit bias of gradient descent for mean squared error regression with wide shallow ReLU networks. We presented a result for the univariate case and generalizations to multi-variate ReLU networks and networks with different activation functions. Our result can also help us characterize the training trajectory of gradient descent in function space. Our main result shows that the trained network outputs a function that interpolates the training data and has the minimum possible weighted 2-norm of the second derivative with respect to the input. This corresponds to an spatially adaptive interpolating spline. The space of interpolating splines is a linear space which has a dimension that is linear in the number of data points. Hence our result means that, even if the network has many parameters, the complexity of the trained functions will be adjusted to the number of data points. Interpolating splines have been studied in great detail in the literature and our result allows us to directly apply corresponding generalization results to the case of trained networks. 
This is related to approximation theory and characterizations for the number of samples and their spacing needed in order to approximate functions from a given smoothness class to a desired precision (Rieger and Zwicknagl, 2010; Wendland, 2004). Zhang et al. (2019) described the implicit bias of gradient descent as minimizing a RKHS norm from initialization. Our result can be regarded as making the RKHS norm explicit, thus providing an interpretable description of the bias in function space. Compared with Zhang et al. (2019), our results give a precise description of the role of the parameter initialization scheme, which determines the inverse curvature penalty function ζ . This gives us a rather good picture of how the initialization affects the implicit bias of gradient descent. This could be used in order to select a good initialization scheme. For instance, one could conduct a pre-assessment of the data to estimate the locations of the input space where the target function has a high curvature, and choose the parameter initialization accordingly. This is an interesting possibility to experiment with, based on our theoretical result. Our result can also be interpreted in combination with early stopping. The training trajectory is approximated by a smoothing spline, meaning that the network will filter out high frequencies which are usually associated to noise in the training data. This behaviour is sometimes referred to as a spectral bias (Rahaman et al., 2019).
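As a quick numerical cross-check of the curvature penalty discussed above, the following Python sketch compares the integral expression of Proposition 7 with the Gaussian closed form of Theorem 9(a). It assumes independent W ~ N(0, σ_w^2) and B ~ N(0, σ_b^2); the particular σ values and the quadrature grid are illustrative choices, not taken from the paper.

```python
import numpy as np

# Check Proposition 7 / Theorem 9(a) numerically for independent Gaussian W, B:
#   zeta(x) = \int |W|^3 p_{W,B}(W, -W x) dW   vs.   2 sw^3 sb^3 / (pi (sb^2 + x^2 sw^2)^2)
# (sw, sb and the integration grid are illustrative choices.)
sw, sb = 1.3, 0.7

def zeta_numeric(x, w_max=60.0, n_grid=400_001):
    w = np.linspace(-w_max, w_max, n_grid)
    dw = w[1] - w[0]
    p_w = np.exp(-w**2 / (2 * sw**2)) / (np.sqrt(2 * np.pi) * sw)       # density of W at w
    p_b = np.exp(-(w * x)**2 / (2 * sb**2)) / (np.sqrt(2 * np.pi) * sb)  # density of B at -w*x
    return np.sum(np.abs(w)**3 * p_w * p_b) * dw                         # Riemann-sum quadrature

def zeta_closed(x):
    return 2 * sw**3 * sb**3 / (np.pi * (sb**2 + x**2 * sw**2)**2)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    print(f"x={x:+.1f}  numeric={zeta_numeric(x):.6f}  closed-form={zeta_closed(x):.6f}")
```

The two columns agree to the displayed precision, and the same quadrature can be reused to tabulate ζ for any other initialization density for which no closed form is available.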
1. What is the focus of the paper regarding implicit bias in neural networks? 2. What are the strengths and weaknesses of the paper's structure and content? 3. Do you have any questions or concerns regarding the paper's notation and terminology usage? 4. How does the reviewer assess the significance and novelty of the theoretical findings presented in the paper? 5. Are there any suggestions or recommendations for improving the paper's clarity and accessibility for a wider machine learning audience?
Review
Review Update after the rebuttal: I would like to thank the authors for the detailed reply and for addressing raised issues in the submission. I appreciate the authors' rationale, but the "standard" structure of papers makes it is easier to follow. The same for a conclusion, for the authors it may be reiterating the same ideas, but personally I found conclusions the best place where one can quickly get a flavour what has been done in the paper to assess whether it is actually worth spending time reading it in details. Also, they are helpful in cases like this when a reader (me) is outside of the research field of the paper. Regarding discussions, I appreciate that there is discussion for Theorem 1, but there are theorems and propositions stated in the formal language only which would benefit by being repeated in plain English. If they are just technical results required for the main proof, they may be moved to appendix then. Overall, I am increasing my score to reflect positive changes in the submission. There is a minor mistakes: In the first line of Conclusion: "obtained aN explicit" ========================================================================================================= The paper considers the implicit bias (i.e. why neural networks generalise well) in gradient descent learning of wide neural networks for the regression problem. The theoretical results are first stated for wide ReLU network for a 1D regression problem. They are then generalised for multivariate regression and different activation functions. It is difficult for me to assess the main content of the paper, as it is outside of my comfort zone. Therefore, this review would be mostly a feedback on overall structure and clarity of the paper. Strong points: apparent solid and large theoretical analysis addressing the important problem of understanding why the neural networks and gradient descent lead to such successful results in various domains Weak points: the paper is not friendly for outsiders of the particular research avenue. It is packed with content without too much space to discussion of presented results to the point that the conclusion section is missing in the paper the structure of the paper is very odd which leads to future references that appear only 3 pages afterwards (see details below) As mentioned I can't assess the main content of the paper, but with my educational guess I would recommend to reject the paper in the current version. Careful revision is required to make it a complete piece of work (add conclusion and discussion) and make it more accessible for wider machine learning audience. In particular, for the above mentioned weak points: I appreciate space constrains, but probably some of the material can be moved to supplementary completely to allow discussions of the main results more. The theoretical findings are mostly presented in the formal language and there is a lack of plain English discussion on what this means and how it affect the bigger picture. And the paper has to have conclusion section and shouldn't end abruptedly It seems that Section 2 and 3 should be swapped as I cannot see the reason why notations and problem formulation are presented after the main results: for those who are inside the field and can understand Section 2 without introduction would not need then introduction in Section 3 at all. And those who do need Section 3 would not understand Section 2 without it. This also leads to these inconvenient future references. 
E.g., referring in Theorem (1) to eq.(5) that appears 3 pages after that is a questionable choice. Some other suggestions/concerns: 1. ReLU is introduced in the last paragraph in Introduction, but used in the previous paragraph 2. NTK is defined well after it is used for the first time 3. The last sentence in Section 2 – there was nothing before about different training trajectories 4. Notation clash: in section 2 sigma denotes an activation function (different from ReLU) and in section 3 sigma denotes a parameter of initialisation distribution: Gaussian and uniform 5. After eq. (7): “We will use subscript i to index neurons and subscript t to index time”, i is also used for training points. 6. Strictly speaking n in equation between eq. (9) and (10) is not well-defined 7. ASI is not defined
ICLR
Title Implicit bias of gradient descent for mean squared error regression with wide neural networks Abstract We investigate gradient descent training of wide neural networks and the corresponding implicit bias in function space. For 1D regression, we show that the solution of training a width-n shallow ReLU network is within n−1/2 of the function which fits the training data and whose difference from initialization has smallest 2-norm of the weighted second derivative with respect to the input. The curvature penalty function 1/ζ is expressed in terms of the probability distribution that is utilized to initialize the network parameters, and we compute it explicitly for various common initialization procedures. For instance, asymmetric initialization with a uniform distribution yields a constant curvature penalty, and thence the solution function is the natural cubic spline interpolation of the training data. While similar results have been obtained in previous works, our analysis clarifies important details and allows us to obtain significant generalizations. In particular, the result generalizes to multivariate regression and different activation functions. Moreover, we show that the training trajectories are captured by trajectories of spatially adaptive smoothing splines with decreasing regularization strength. 1 INTRODUCTION Understanding why neural networks trained in the overparametrized regime and without explicit regularization generalize well in practice is an important problem (Zhang et al., 2017). Some form of capacity control different from network size must be at play (Neyshabur et al., 2014) and specifically the implicit bias of parameter optimization has been identified to play a key role (Neyshabur et al., 2017). By implicit bias we mean that among the many hypotheses that fit the training data, the algorithm selects one which satisfies additional properties that may be beneficial for its performance on new data. Jacot et al. (2018) and Lee et al. (2019) showed that the training dynamics of shallow and deep wide neural networks is well approximated by that of the linear Taylor approximation of the models at a suitable initialization. Chizat et al. (2019) observe that a model can converge to zero training loss while hardly varying its parameters, a phenomenon that can be attributed to scaling of the output weights and makes the model behave as its linearization around the initialization. Zhang et al. (2019) consider linearized models for regression problems and show that gradient flow finds the global minimum of the loss function which is closest to initialization in parameter space. This type of analysis connects with trajectory based analysis of neural networks (Saxe et al., 2014). Oymak and Soltanolkotabi (2019) studied the overparametrized neural networks directly and showed that gradient descent finds a global minimizer of the loss function which is close to the initialization. Towards interpreting parameters in function space, Savarese et al. (2019) and Ongie et al. (2020) studied infinite-width neural networks with parameters having bounded norm, in 1D and multi-dimensional input spaces, respectively. They showed that, under a standard parametrization, the complexity of the functions represented by the network, as measured by the 1-norm of the second derivative, can be controlled by the 2-norm of the parameters. Using these results, one can show that gradient descent with `2 weight penalty leads to simple functions. Sahs et al. 
(2020) relates function properties, such as breakpoint and slope distributions, to the distributions of the network parameters. The implicit bias of parameter optimization has been investigated in terms of the properties of the loss function at the points reached by different optimization methodologies (Keskar et al., 2017; Wu et al., 2017; Dinh et al., 2017). In terms of the solutions, Maennel et al. (2018) show that gradient flow for shallow networks with rectified linear units (ReLU) initialized close to zero quantizes features in a way that depends on the training data but not on the network size. Williams et al. (2019) obtained results for 1D regression contrasting the kernel and adaptive regimes. Soudry et al. (2018) show that in classification problems with separable data, gradient descent with linear networks converges to a maxmargin solution. Gunasekar et al. (2018b) present a result on implicit bias for deep linear convolutional networks, and Ji and Telgarsky (2019) study non-separable data. Chizat and Bach (2020) show that gradient flow for logistic regression with infinitely wide two-layer networks yields a max-margin classifier in a certain space. Gunasekar et al. (2018a) analyze the implicit bias of different optimization methods (natural gradient, steepest and mirror descent) for linear regression and separable linear classification problems, and obtain characterizations in terms of minimum norm or max-margin solutions. In this work, we study the implicit bias of gradient descent for regression problems. We focus on wide ReLU networks and describe the bias in function space. In Section 2 we provide settings and notation. We present our main results in Section 3, and develop the main theory in Sections 4 and 5. In the interest of a concise presentation, technical proofs and extended discussions are deferred to appendices. 2 NOTATION AND PROBLEM SETUP Consider a fully connected network with d inputs, one hidden layer of width n, and a single output. For any given input x∈Rd, the output of the network is f(x,θ)= n∑ i=1 W (2) i φ(〈W (1) i ,x〉+b (1) i )+b (2), (1) where φ is a point-wise activation function,W (1)∈Rn×d,W (2)∈Rn, b(1)∈Rn and b(2)∈R are the weights and biases of layer l=1,2. We write θ=vec(∪2l=1{W (l),b(l)}) for the vector of all network parameters. These parameters are initialized by independent samples of pre-specified random variables W and B in the following way: W (1) i,j d = √ 1/dW, b(1)i d = √ 1/d B W (2) i d = √ 1/nW, b(2) d= √ 1/n B. (2) More generally, we will also allow weight-bias pairs to be sampled from a joint distribution of (W,B) which we only assume to be sub-Gaussian. In the analysis of Jacot et al. (2018); Lee et al. (2019), W and B are Gaussian N (0,σ2). In the default initialization of PyTorch,W and B have uniform distribution U(−σ,σ). The setting (1) is known as the standard parametrization. Some works (Jacot et al., 2018; Lee et al., 2019) utilize the so-called NTK parametrization, where the factor √ 1/n is carried outside of the trainable parameter. If we fix the learning rate for all parameters, gradient descent leads to different trajectories under these two parametrizations. Our results are presented for the standard parametrization. Details on this in Appendix C.3. We consider a regression problem for data {(xj , yj)}Mj=1 with inputs X = {xj}Mj=1 and outputs Y = {yj}Mj=1. For a loss function ` : R × R → R, the empirical risk of our function is L(θ)= ∑M j=1`(f(xj ,θ),yj). 
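As a concrete reference point, the model (1) with the initialization scaling (2) takes only a few lines to write down. The sketch below assumes d = 1 and Gaussian W, B with unit variance (the width values and function names are our own choices); the 1/√n factor on the output layer is what keeps f(x, θ) of order one at initialization as the width grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(n, d=1):
    """Sample theta as in (2), here with W, B ~ N(0, 1)."""
    W1 = np.sqrt(1.0 / d) * rng.normal(size=(n, d))   # W^(1)
    b1 = np.sqrt(1.0 / d) * rng.normal(size=n)        # b^(1)
    W2 = np.sqrt(1.0 / n) * rng.normal(size=n)        # W^(2)
    b2 = np.sqrt(1.0 / n) * rng.normal()              # b^(2)
    return W1, b1, W2, b2

def f(params, x):
    """Network output (1): f(x, theta) = sum_i W2_i * relu(<W1_i, x> + b1_i) + b2."""
    W1, b1, W2, b2 = params
    hidden = np.maximum(x @ W1.T + b1, 0.0)           # ReLU activations, shape (batch, n)
    return hidden @ W2 + b2

x = np.linspace(-2.0, 2.0, 5).reshape(-1, 1)          # a small batch of 1D inputs
for n in (100, 10_000):
    print(n, f(init_params(n), x))                    # outputs stay O(1) as the width grows
```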
We use full batch gradient descent with a fixed learning rate η to minimize L(θ). Writing θt for the parameter at time t, and θ0 for the initialization, this defines an iteration θt+1 =θt−η∇L(θ)=θt−η∇θf(X ,θt)T∇f(X ,θt)L, (3) where f(X ,θt)=[f(x1,θt),...,f(xM ,θt)]T is the vector of network outputs for all training inputs, and ∇f(X ,θt)L is the gradient of the loss with respect to the model outputs. We will use subscript i to index neurons and subscript t to index time. Let Θ̂n be the empirical neural tangent kernel (NTK) of the standard parametrization at time 0, which is the matrix Θ̂n= 1n∇θf(X ,θ0)∇θf(X ,θ0) T . 3 MAIN RESULTS AND DISCUSSION We obtain a description of the implicit bias in function space when applying gradient descent to regression problems with wide ReLU neural networks. We prove the following result in Appendix D. An interpretation of the result and generalizations are given further below. Theorem 1 (Implicit bias of gradient descent in wide ReLU networks). Consider a feedforward network with a single input unit, a hidden layer of n rectified linear units, and a single linear output unit. Assume standard parametrization (1) and that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W,B) (2) with joint density pW,B. Then, for any finite data set {(xj ,yj)}Mj=1 and sufficiently large n there exist constant u and v so that optimization of the mean square error on the adjusted training data {(xj , yj − uxj − v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which the output function f(x,θ∗) (1) attains zero training error. Furthermore, letting ζ(x)= ∫ R|W | 3pW,B(W,−Wx) dW and S=supp(ζ)∩[minixj ,maxixj ], we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈S (the 2-norm over S) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C2(S) ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx subject to g(xj)=yj−uxj−v, j=1,...,M. (4) Interpretation An intuitive interpretation of the theorem is that at those regions of the input space where ζ is smaller, we can expect the difference between the functions after and before training to have a small curvature. We may call ρ=1/ζ a curvature penalty function. The bias induced from initialization is expressed explicitly. We note that under suitable asymmetric parameter initialization (see Appendix C.2), it is possible to achieve f(·,θ0)≡0. Then the regularization is on the curvature of the output function itself. In Theorem 9 we obtain the explicit form of ζ for various common parameter initialization procedures. In particular, when the parameters are initialized independently from a uniform distribution on a finite interval, ζ is constant and the problem is solved by the natural cubic spline interpolation of the data. The adjustment of the training data simply accounts for the fact that second derivatives define a function only up to linear terms. In practice we can use the coefficients a and b of linear regression yj = axj + b+ j , j = 1, ... ,M , and set the adjusted data as {(xj , j)}Mj=1. Although Theorem 1 describes the gradient descent training with the linearly adjusted data, this result can also approximately describe training with the original training data. Further details are provided in Appendix L. We illustrate Theorem 1 numerically in Figure 1 and more extensively in Appendix A. 
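When ζ is constant (the uniform-type initializations of Theorem 9) and ASI makes f(·, θ0) ≡ 0, the solution predicted by Theorem 1 can be computed without training anything: de-trend the data by linear regression, interpolate the residuals with a natural cubic spline, and add the trend back. The sketch below is meant only as such a reference curve under those assumptions; the data values and helper names are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

xs = np.array([-1.8, -1.0, -0.2, 0.4, 1.1, 1.9])       # training inputs x_j
ys = np.array([0.3, -0.2, 0.6, 0.1, -0.5, 0.4])        # training targets y_j

a, b = np.polyfit(xs, ys, deg=1)                        # linear regression y ~ a x + b
residuals = ys - (a * xs + b)                           # adjusted targets, as below Theorem 1

g_star = CubicSpline(xs, residuals, bc_type="natural")  # solves (4) when zeta is constant and f(.,theta0) = 0

def predicted_network(x):
    """Reference curve: natural spline of the residuals plus the linear trend added back."""
    return g_star(x) + a * x + b

grid = np.linspace(-1.8, 1.9, 9)                        # stay inside the data range
print(predicted_network(grid))
```

Plotting predicted_network against the output of a wide network trained on the same data is essentially the comparison reported around Figure 1.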
In close agreement with the theory, the solution to the variational problem captures the solution of gradient descent training uniformly with error of order n−1/2. To illustrate the effect of the curvature penalty function, Figure 1 also shows the solutions to the variational problem for different values of ζ corresponding to different initialization distributions. We see that at input points where ζ is small / peaks strongly, the solution function tends to have a lower curvature / be able to use a higher curvature in order to fit the data. With the presented bias description we can formulate heuristics for parameter initialization either to ease optimization or also to induce specific smoothness priors on the solutions. In particular, by Proposition 8 any curvature penalty 1/ζ can be implemented by an appropriate choice of the parameter initialization distribution. By our analysis, the effective capacity of the model, understood as the set of possible output functions after training, is adapted to the sizeM of the training dataset and is well captured by a space of cubic splines relative to the initial function. This is a space with dimension of orderM independently of the number of parameters of the network. Strategy of the proof In Section 4, we observe that for a linearized model gradient descent with sufficiently small step size finds the minimizer of the training objective which is closest to the initial parameter (similar to a result by Zhang et al., 2019). Then Theorem 4 shows that the training dynamics of the linearization of a wide network is well approximated in parameter and function space by that of a lower dimensional linear model which trains only the output weights. This property is sometimes taken for granted and we show that it holds for the standard parametrization, although it does not hold for the NTK parametrization (defined in Appendix C.3), which leads to the adaptive regime. In Section 5, for networks with a single input and a single layer of ReLUs, we relate the implicit bias of gradient descent in parameter space to an alternative optimization problem. In Theorem 5 we show that the solution of this problem has a well defined limit as the width of the network tends to infinity, which allows us to obtain a variational formulation. In Theorem 6 we translate the description of the bias from parameter space to function space. In Theorem 9 we provide explicit descriptions of the weight function for various common initialization procedures. Finally, we can utilize recent results bounding the difference in function space of the solutions obtained from training a wide network and its linearization (Lee et al., 2019, Theorem H.1). Generalizations Theorem 4 has several generalizations elaborated in Appendix P. For multivariate regression, we have the following theorem. Theorem 2 (Multivariate regression). Use the same network setting as in Theorem 1 except that the number of input units changes to d. Assume that for each hidden unit the input weight and bias are initialized from a sub-Gaussian (W ,B) where W is a d-dimensional random vector and B is a random variable. Then, for any finite data set {(xj ,yj)}Mi=1 and sufficiently large n there exist constant vector u and constant v so that optimization of the mean square error on the adjusted training data {(xj ,yj−〈u,xj〉−v)}Mj=1 by full-batch gradient descent with sufficiently small step size converges to a parameter θ∗ for which f(x,θ∗) attains zero training error. 
Furthermore, letU=‖W‖2, V = W/‖W‖2, C =−B/‖W‖2 and ζ(V ,c) = pV,C(V ,c)E(U2|V = V ,C = c) where pV,C is the joint density of (V ,C). Then we have ‖f(x,θ∗)−g∗(x)‖2 =O(n− 1 2 ),x∈Rd (the 2-norm over Rd) with high probability over the random initialization θ0, where g∗ solves following variational problem: min g∈C(Rd) ∫ supp(ζ) ( R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c) )2 ζ(V ,c) dV dc subject to g(xj)=yj , j=1,...,M, R{(−∆)(d+1)/2(g−f(·,θ0))}(V ,c)=0, (V ,c) 6∈supp(ζ). (5) Here R is the Radon transform which is defined by R{f}(ω, b) := ∫ 〈ω,x〉=b f(x)ds(x), and the power of the negative Laplacian (−∆)(d+1)/2 is the operator defined in Fourier domain by ̂(−∆)(d+1)/2f(ξ)=‖ξ‖d+1f̂(ξ). For different activation functions, we have the following corollary. Corollary 3 (Different activation functions). Use the same setting as in Theorem 1 except that we use the activation function φ instead of ReLU. Suppose that φ is a Green’s function of a linear operator L, i.e. Lφ= δ, where δ denotes the Dirac delta function. Assume that the activation function φ is homogeneous of degree k, i.e. φ(ax)=akφ(x) for all a>0. Then we can find a function p satisfying Lp≡ 0 and adjust training data {(xj ,yj)}Mj=1 to {(xj ,yj−p(xj)}Mj=1. After that, the statement in Theorem 1 holds with the variational problem (4) changed to min g∈C2(S) ∫ S 1 ζ(x) [L(g(x)−f(x,θ0))]2 dx s.t. g(xj)=yj−p(xj), j=1,...,M, (6) where ζ(x)=pC(x)E(W2k|C=x) and S=supp(ζ)∩[minixi,maxixi]. Moreover, our method allows us to describe the optimization trajectory in function space (see Appendix N). If we substitute constraints g(xj)=yj in (4) by a quadratic term 1λ 1 M ∑M j=1(g(xj)−yj)2 added to the objective, we obtain the variational problem for a so-called spatially adaptive smoothing spline (see Abramovich and Steinberg, 1996; Pintore et al., 2006). This problem can be solved explicitly and can be shown to approximate early stopping. To be more specific, the solution to following optimization problem approximates the output function of the network after gradient descent training for t steps with learning rate η̄/n: min g∈C2(S) M∑ j=1 [g(xj)−yj ]2+ 1 η̄t ∫ S 1 ζ(x) (g′′(x)−f ′′(x,θ0))2 dx. (7) Related works Zhang et al. (2019) described the implicit bias of gradient descent in the kernel regime as minimizing a kernel norm from initialization, subject to fitting the training data. Our result can be regarded as making the kernel norm explicit, thus providing an interpretable description of the bias in function space and further illuminating the role of the parameter initialization procedure. We prove the equivalence in Appendix M. Savarese et al. (2019) showed that infinite-width networks with 2-norm weight regularization represent functions with smallest 1-norm of the second derivative, an example of which are linear splines. We discuss this in Appendix C.4. A recent preprint further develops this direction for two-layer networks with certain activation functions that interpolate data while minimizing a weight norm (Parhi and Nowak, 2019). In contrast, our result characterizes the solutions of training from a given initialization without explicit regularization, which turn out to minimize a weighted 2-norm of the second derivative and hence correspond to cubic splines. In finishing this work we became aware of a recent preprint (Heiss et al., 2019) which discusses ridge weight penalty, adaptive splines, and early stopping for one-input ReLU networks training only the output layer. Williams et al. 
(2019) showed a similar result in the kernel regime for shallow ReLU networks where they train only the second layer and from zero initialization. In contrast, we consider the initialization of the second layer and show that the difference from the initial output function is implicitly regularized by gradient descent. We show the result of training both layers and prove that it can be approximated by training only the second layer in Theorem 4. In addition, we give the explicit form of ζ in Theorem 9, while the ζ given by Williams et al. (2019) has a minor error because of a typo in their computation. Most importantly, our statement can be generalized to multivariate regression, different activation functions, training trajectories. 4 WIDE NETWORKS AND PARAMETER SPACE 4.1 IMPLICIT BIAS IN PARAMETER SPACE FOR A LINEARIZED MODEL In this section we describe how training a linearized network or a wide network by gradient descent leads to solutions that are biased, having parameter values close to the values at initialization. First, we consider the following linearized model: f lin(x,ω)=f(x,θ0)+∇θf(x,θ0)(ω−θ0). (8) We write ω for the parameter of the linearized model, in order to distinguish it from the parameter of the nonlinearized model. The empirical loss of the linearized model is defined by Llin(ω)= ∑M j=1`(f lin(xj ,ω),yj). The gradient descent iteration for the linearized model is given by ω0 =θ0, ωt+1 =ωt−η∇θf(X ,θ0)T∇f lin(X ,ωt)L lin. (9) Next, we consider wide neural networks. According to Lee et al. (2019, Theorem H.1), sup t ‖f lin(x,ωt)−f(x,θt)‖2 =O(n− 1 2 ) with arbitrarily high probability. So gradient descent training of a wide network or of the linearized model give similar trajectories and solutions in function space. Both fit the training data perfectly, meaning f lin(X ,ω∞)=f(X ,θ∞)=Y , and are also approximately equal outside the training data. Under the assumption that rank(∇θf(X ,θ0)) =M , the gradient descent iterations (9) converge to the unique global minimum that is closest to initialization (Gunasekar et al., 2018a; Zhang et al., 2019), which is the solution of following constrained optimization problem (further details and remarks are provided in Appendix E): min ω ‖ω−θ0‖2 s.t. f lin(X ,ω)=Y. (10) 4.2 TRAINING ONLY THE OUTPUT LAYER APPROXIMATES TRAINING ALL PARAMETERS From now on we consider networks with a single hidden layer of n ReLUs and a linear output f(x,θ) = ∑n i=1W (2) i [W (1) i x+ b (1) i ]+ + b (2). We show that the functions and parameter vectors obtained by training the linearized model are close to those obtained by training only the output layer. Hence, by the arguments of the previous section, training all parameters of a wide network or training only the output layer gives similar functions. Let θ0 = vec(W (1) ,b (1) ,W (2) ,b (2) ) be the parameter at initialization so that f lin(·,θ0) = f(·,θ0). After training the linearized network let the parameter be ω∞ = vec(Ŵ (1),b̂(1),Ŵ (2),b̂(2)). Using initialization (2), with probability arbitrarily close to 1,W (1) i ,b (1) i =O(1) andW (2) i ,b (2) =O(n− 1 2 ).1 Therefore, writingH for the Heaviside function, we have ∇ W (1) i ,b (1) i f(x,θ0)= [ W (2) i H(W (1) i x+b (1) )·x,W (2)i H(W (1) i x+b (1) i ) ] =O(n− 1 2 ), ∇ W (2) i ,b (2)f(x,θ0)= [ [W (1) i x+b (1) i ]+ ,1 ] =O(1). (11) So when n is large, if we use gradient descent with a constant learning rate for all parameters, then the changes ofW (1), b(1), b(2) are negligible compared with the changes ofW (2). 
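The claim that the first-layer updates are negligible can be checked directly from (11): the per-unit gradient with respect to W^(1) scales like n^{-1/2}, while the per-unit gradient with respect to W^(2) stays of order one. A minimal sketch, assuming Gaussian W, B and a single scalar input (the particular x and widths are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = 0.7                                                  # a single scalar input (illustrative)

for n in (100, 10_000, 1_000_000):
    W1 = rng.normal(size=n); b1 = rng.normal(size=n)     # input-layer parameters, O(1)
    W2 = rng.normal(size=n) / np.sqrt(n)                 # output weights, O(n^{-1/2}) as in (2)
    pre = W1 * x + b1
    H = (pre > 0).astype(float)                          # Heaviside of the pre-activations
    grad_W1 = W2 * H * x                                 # df/dW^(1)_i, first line of (11)
    grad_W2 = np.maximum(pre, 0.0)                       # df/dW^(2)_i, second line of (11)
    rms = lambda v: np.sqrt(np.mean(v ** 2))             # typical per-unit gradient magnitude
    print(f"n={n:>9}  rms dW1={rms(grad_W1):.5f}  rms dW2={rms(grad_W2):.5f}")
```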
So approximately we can train just the output weights,W (2)i ,i=1,...,n, and fix all other parameters. This corresponds to a smaller linear model. Let ω̃t = vec(W (1) t ,b (1) t ,W̃ (2) t ,b (2) t ) be the parameter at time t under the update rule whereW (1) ,b (1) , b (2) are kept fixed at their initial values, and W̃ (2) 0 =W (2) , W̃ (2) t+1 =W̃ (2) t −η∇W (2)Llin(ω̃t). (12) Let ω̃∞ = limt→∞ ω̃t. By the above discussion, we expect that f lin(x,ω̃∞) is close to f lin(x,ω∞). In fact, we prove the following for the MSE loss. The proof and further remarks are provided in Appendix F. We relate Theorem 4 to training a wide network in Appendix G. Theorem 4 (Training only output weights vs linearized network). Consider a finite data set {(xi,yi)}Mi=1. Assume that (1) we use the MSE loss `(ŷ,y) = 12‖ŷ−y‖ 2 2; (2) infnλmin(Θ̂n)> 0. Let ωt denote the parameters of the linearized model at time t when we train all parameters using (9), and let ω̃t denote the parameters at time t when we only train weights of the output layer using (12). If we use the same learning rate η in these two training processes and η < 2 nλmax(Θ̂n) , then for any x∈R, with probability arbitrarily close to 1 over the random initialization (2), sup t |f lin(x,ω̃t)−f lin(x,ωt)|=O(n−1), as n→∞. (13) Moreover, in terms of the parameter trajectories we have supt ‖W (1) t − Ŵ (1) t ‖2 = O(n−1), supt‖b (1) t −b̂ (1) t ‖2 =O(n−1), supt‖W̃ (2) t −Ŵ (2) t ‖2 =O(n−3/2), supt‖b (2) t −b̂ (2) t ‖=O(n−1). In view of the arguments in this section, in the next sections we will focus on training only the output weights and understanding the corresponding solution functions. 1More precisely, for any δ>0, ∃C, s.t. with prob. 1−δ, |W (2)i |,|b (2)|≤Cn−1/2 and |W (1)i |,|b (1) i |≤C. 5 GRADIENT DESCENT LEADS TO SIMPLE FUNCTIONS In this section we provide a function space characterization of the implicit bias previously described in parameter space. According to (10), gradient descent training of the output weights (12) achieves zero loss, f lin(xj ,ω̃∞)−f lin(xj ,θ0)= ∑n i=1(W̃ (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M , with minimum ‖W̃ (2)−W (2)‖22. Hence gradient descent is actually solving min W (2) ‖W (2)−W (2)‖22 s.t. n∑ i=1 (W (2) i −W (2) i )[W (1) i xj+bi]+ =yj−f(xj ,θ0), j=1,...,M. (14) To simplify the presentation, in the following we let f lin(x,θ0) ≡ 0 by using the ASI trick (see Appendix C.2). The analysis still goes through without this. 5.1 INFINITE WIDTH LIMIT We reformulate problem (14) in a way that allows us to consider the limit of infinitely wide networks, with n → ∞, and obtain a deterministic counterpart, analogous to the convergence of the NTK. Let µn denote the empirical distribution of the samples (W (1) i , bi) n i=1, so that µn(A) = 1 n ∑n i=11A ( (W (1) i ,bi) ) . Here 1A is the indicator function for measurable subsets A in R2. We further consider a function αn : R2→R whose value encodes the difference of the output weight from its initialization for a hidden unit with input weight and bias given by the argument, αn(W (1) i ,bi)=n(W (2) i −W (2) i ). Then (14) with ASI can be rewritten as min αn∈C(R2) ∫ R2 α2n(W (1),b) dµn(W (1),b) s.t. ∫ R2 αn(W (1),b)[W (1)xj+b]+ dµn(W (1),b)=yj , (15) where j ranges from 1 toM . Here we minimize over functions αn inC(R2), but since only the values on (W (1)i ,bi) n i=1 are taken into account, we can take any continuous interpolation of αn(W (1) i ,bi), i=1,...,n. Now we can consider the infinite width limit. Let µ be the probability measure of (W,B). 
We obtain a continuous version of problem (15) by substituting µ for µn. Since we know that µn weakly converges to µ, we prove that in fact the solution of problem (15) converges to the solution of the continuous problem, which is formulated in the following theorem. Details in Appendix H. Theorem 5. Let (W (1)i ,bi)ni=1 be i.i.d. samples from a pair (W,B) of random variables with finite fourth moment. Suppose µn is the empirical distribution of (W (1) i ,bi) n i=1 and αn(W (1),b) is the solution of (15). Let α(W (1),b) be the solution of the continuous problem with µ in place of µn. Then for any bounded [−L,L], supx∈[−L,L]|gn(x,αn)−g(x,α)|=O(n−1/2) with high probability, where gn(x,αn)= ∫ R2αn(W (1),b)[W (1)x+b]+ dµn(W (1),b) is the function represented by a network with n hidden neurons after training, and g(x,α)= ∫ R2α(W (1),b)[W (1)x+b]+ dµ(W (1),b) is the function represented by the infinite-width network. 5.2 FUNCTION SPACE DESCRIPTION OF THE IMPLICIT BIAS Next we connect the problem from the previous section to second derivatives by first rewriting it in terms of breakpoints. Consider the breakpoint c=−b/W (1) of a ReLU with weightW (1) and bias b. We define a corresponding random variable C=−B/W and let ν denote the distribution of (W,C).2 Then with γ(W (1),c)=α(W (1),−cW (1)) the continuous version of (15) is equivalently given as min γ∈C(R2) ∫ R2 γ2(W (1),c) dν(W (1),c) s.t. ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , (16) where j ranges from 1 toM . Let νC denote the distribution of C=−B/W , and νW|C=c the conditional distribution of W given C = c. Suppose νC has support supp(νC) and a density function pC(c). 2Here we assume that P(W=0)=0 so that the random variable C is well defined. It is not an important restriction, since neurons with weightW (1)=0 give constant functions that can be absorbed in the bias of output layer. Let g(x,γ) = ∫ R2 γ(W (1), c)[W (1)(x− c)]+ dν(W (1), c), which again corresponds to the output function of the network. Then, the second derivative g′′ with respect to x (see Appendix I) satisfies g′′(x,γ)=pC(x) ∫ Rγ(W (1),x) ∣∣W (1)∣∣ dνW|C=x(W (1)). Thus γ(W (1),c) is closely related to g′′(x,γ) and we can try to express (16) in terms of g′′(x,γ). Since g′′(x,γ) determines g(x,γ) only up to linear functions, we consider the following problem: min γ∈C(R2),u∈R,v∈R ∫ R2 γ2(W (1),c) dν(W (1),c) subject to uxj+v+ ∫ R2 γ(W (1),c)[W (1)(xj−c)]+ dν(W (1),c)=yj , j=1,...,M. (17) Here u,v are not included in the cost. They add a linear function to the output of the neural network. If u and v in the solution of (17) are small, then the solution is close to the solution of (16). Ongie et al. (2020) also use this trick to simplify the characterization of neural networks in function space. Next we study the solution of (17) in function space. This is our main technical result. Theorem 6 (Implicit bias in function space). Assume W and B are random variables with P(W = 0) = 0, and let C =−B/W . Let ν denote the probability distribution of (W,C). Suppose (γ,u,v) is the solution of (17), and consider the corresponding output function g(x,(γ,u,v))=ux+v+ ∫ R2 γ(W (1),c)[W (1)(x−c)]+ dν(W (1),c). (18) Let νC denote the marginal distribution of C and assume it has a density function pC . Let E(W2|C) denote the conditional expectation ofW2 given C. Consider the function ζ(x)=pC(x)E(W2|C=x). Assume that training data xi∈supp(ζ), i=1,...,m. Consider the set S=supp(ζ)∩[minixi,maxixi]. 
Then g(x,(γ,u,v)) satisfies g''(x,(γ,u,v)) = 0 for x ∉ S, and for x ∈ S it is the solution of the following problem:

min_{h ∈ C^2(S)}  ∫_S (h''(x))^2 / ζ(x) dx   s.t.   h(x_j) = y_j,  j = 1, ..., m.   (19)

The proof is provided in Appendix I, where we also present the corresponding statement without ASI. We study the explicit form of this function in the next section.

5.3 EXPLICIT FORM OF THE CURVATURE PENALTY FUNCTION

Proposition 7. Let p_{W,B} denote the joint density function of (W,B) and let C = −B/W, so that p_C is the breakpoint density. Then ζ(x) = E(W^2 | C = x) p_C(x) = ∫_R |W|^3 p_{W,B}(W, −Wx) dW.

The proof is presented in Appendix J. If we allow the initial weights and biases to be sampled from a suitable joint distribution, we can make the curvature penalty ρ = 1/ζ arbitrary.

Proposition 8 (Constructing any curvature penalty). Given any function ϱ : R → R_{>0} satisfying Z = ∫_R 1/ϱ < ∞, if we set the density of C as p_C(x) = (1/Z)(1/ϱ(x)) and make W independent of C with non-vanishing second moment, then (E(W^2 | C = x) p_C(x))^{-1} = (E(W^2) p_C(x))^{-1} ∝ ϱ(x), x ∈ R.

Further remarks on sampling and independent variables are provided in Appendix J. To conclude this section we compute the explicit form of ζ for several common initialization procedures.

Theorem 9 (Explicit form of the curvature penalty for common initializations). (a) Gaussian initialization. Assume that W and B are independent, W ∼ N(0, σ_w^2) and B ∼ N(0, σ_b^2). Then ζ is given by ζ(x) = 2σ_w^3 σ_b^3 / (π (σ_b^2 + x^2 σ_w^2)^2). (b) Binary-uniform initialization. Assume that W and B are independent, W ∈ {−1, 1} and B ∼ U(−a_b, a_b) with a_b ≥ L. Then ζ is constant on [−L, L]. (c) Uniform initialization. Assume that W and B are independent, W ∼ U(−a_w, a_w) and B ∼ U(−a_b, a_b) with a_b/a_w ≥ L. Then ζ is constant on [−L, L].

The proof is provided in Appendix K. Theorem 9 (b) and (c) show that for certain distributions of (W,B), ζ is constant. In this case problem (19) is solved by the cubic spline interpolation of the data with natural boundary conditions (Ahlberg et al., 1967). The case of general ζ is solved by spatially adaptive natural cubic splines, which can be computed numerically by solving a linear system and theoretically in an RKHS formalism. We provide details in Appendix O.

6 CONCLUSION AND DISCUSSION

We obtained an explicit description of the implicit bias of gradient descent for mean squared error regression with wide shallow ReLU networks. We presented a result for the univariate case and generalizations to multivariate ReLU networks and networks with different activation functions. Our result can also help us characterize the training trajectory of gradient descent in function space. Our main result shows that the trained network outputs a function that interpolates the training data and has the minimum possible weighted 2-norm of the second derivative with respect to the input. This corresponds to a spatially adaptive interpolating spline. The space of interpolating splines is a linear space whose dimension is linear in the number of data points. Hence our result means that, even if the network has many parameters, the complexity of the trained functions will be adjusted to the number of data points. Interpolating splines have been studied in great detail in the literature, and our result allows us to directly apply corresponding generalization results to the case of trained networks.
This is related to approximation theory and characterizations for the number of samples and their spacing needed in order to approximate functions from a given smoothness class to a desired precision (Rieger and Zwicknagl, 2010; Wendland, 2004). Zhang et al. (2019) described the implicit bias of gradient descent as minimizing a RKHS norm from initialization. Our result can be regarded as making the RKHS norm explicit, thus providing an interpretable description of the bias in function space. Compared with Zhang et al. (2019), our results give a precise description of the role of the parameter initialization scheme, which determines the inverse curvature penalty function ζ . This gives us a rather good picture of how the initialization affects the implicit bias of gradient descent. This could be used in order to select a good initialization scheme. For instance, one could conduct a pre-assessment of the data to estimate the locations of the input space where the target function has a high curvature, and choose the parameter initialization accordingly. This is an interesting possibility to experiment with, based on our theoretical result. Our result can also be interpreted in combination with early stopping. The training trajectory is approximated by a smoothing spline, meaning that the network will filter out high frequencies which are usually associated to noise in the training data. This behaviour is sometimes referred to as a spectral bias (Rahaman et al., 2019).
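This smoothing-spline view of the trajectory, problem (7), is easy to simulate directly. The sketch below assumes a constant ζ and represents h on a uniform grid with a finite-difference second-derivative penalty; the data, grid resolution and step counts are illustrative choices. As the step count t grows, the weight 1/(η̄t) decays and the fit tightens onto the training points.

```python
import numpy as np

xs = np.array([-1.5, -0.7, 0.0, 0.6, 1.4])              # training inputs
ys = np.array([0.1, -0.4, 0.5, 0.2, -0.3])              # training targets

G = 401                                                  # grid on which h is represented
xg = np.linspace(-2.0, 2.0, G)
dx = xg[1] - xg[0]
idx = np.argmin(np.abs(xg[:, None] - xs[None, :]), axis=0)    # nearest grid node per data point
S = np.zeros((len(xs), G)); S[np.arange(len(xs)), idx] = 1.0  # evaluation matrix h -> h(x_j)

# Interior second differences: (h_{k-1} - 2 h_k + h_{k+1}) / dx^2.
D2 = (np.eye(G, k=-1) - 2 * np.eye(G) + np.eye(G, k=1))[1:-1] / dx**2
penalty = dx * D2.T @ D2                                 # discretization of \int (h'')^2 dx (zeta constant)

eta_bar = 1.0
for t in (10, 1_000, 100_000):
    lam = 1.0 / (eta_bar * t)                            # regularization strength in (7)
    h = np.linalg.solve(S.T @ S + lam * penalty, S.T @ ys)
    print(t, np.max(np.abs(S @ h - ys)))                 # data residuals shrink as t grows
```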
1. What is the main contribution of the paper regarding the implicit bias of gradient descent-based learning? 2. What are the strengths and weaknesses of the paper's analysis and organization? 3. Do you have any concerns about the proposed analysis being incremental and failing to provide novel insights on the implicit bias of gradient descent? 4. How does the number of training samples come into play in the paper's analysis? 5. Can similar behaviors be empirically observed beyond the single-hidden-layer model without proof? 6. How should we choose the initialization scheme in different problems? 7. Why did the authors present the long and complex theorems before introducing the models and notations? 8. Why did the authors present the model and notations in Section 3.1 and 3.2 in the general context of L-hidden-layer neural networks when none of the technical results are concerned with this "deep" case? 9. Is there a claim in the paper that the uniqueness of the global minimum is ensured? If yes, where is it proved in the appendix? 10. Why is the change in b(2) also negligible compared to that of W(2)?
Review
Review Summary: In this article, the authors characterized the implicit bias of gradient descent-based learning in the setting of wide single-hidden-layer neural networks with ReLU activation. More precisely, it was shown in Theorem 1 that the trained network output is "close" to that of the zero-training-error solution that is the "closest" to initialization when the number of neurons n is large. In particular, such a "closest" solution is defined via the so-called curvature penalty function 1 / ζ that depends on the random initialization of the parameters. Theses results were shown in Theorem 1 to hold for scalar input with ReLU activation, and then generalized to multi-dimensional input with ReLU nonlinearity in Theorem 2 and to general homogeneous activation function in Corollary 3. Strong points: This paper provides a precise characterization of the implicit bias of (full-batch) gradient descent method in training single-hidden-layer NN, by studying the resulting solution in the setting of large network width. The major contribution of this work, I believe, is to provide explicit characterization of the impact of the distribution of the random initialization on the resulting solution, which is then connected to cubic spline interpolation. Weak points: My first concern is that the proposed analysis seems somewhat incremental (compared to previous efforts discussed in P4) and fails to provide sufficiently novel insights on the implicit bias of gradient descent: it is good to have the explicit form as in (1) that depends on the random initialization and the second derivative, and I believe it worth more than a single paragraph of discussion and a single figure to illustrate its practical implications, e.g., how does the number of training sample M come into play? Can similar behaviors be empirically observed beyond the single-hidden-layer model, even without proof? how should we choose the initialization scheme in different problems? My second concern is the organization of the paper: I do not understand why the authors have chosen to present the long and complex theorems in Section 2 before introducing the models and notations in Section 3, this makes the paper, at least for me, much harder to read and follow. Also, I do not understand why the authors have chosen to present the model and notations in Sec 3.1 and 3.2 in the general context of L -hidden-layer neural networks: none of the technical results are concerned with this "deep" case and I personally find this only creates unnecessary confusion. Recommendation: I find this paper borderline, and according to the weak points mentioned above, I'm more leaning toward a reject. Detailed comments: abstract: "of the second derivative": the second derivative of what with respect to what? Theorem 1: "for which f ( x , θ ∗ ) attains zero training error": f ( ⋅ ) is not yet defined, and it would also be helpful to recall the definition of C 2 ( S ) in (1). Below (9): it would be helpful to clarify the conditions under which Lee et al. (2019, Theorem H.1) hold and if they are compatible with the assumptions for instance in Theorem 1 of the present article. Above (10): "gradient descent training of a wide network or of the linearized model giveS similar trajectories and solutions in function space": the argument on the "trajectory" is not reflected in the last equation in P5, which only characterizes the network output. Above (10): "converge to the unique global minimum": how is the uniqueness ensured here? 
Above (11) footnote 1: is this a claim? If yes, it would be helpful to point out in which section of the appendix this is proved. After (11): I am not sure I understand why the change in b^(2) is also negligible compared to that of W^(2). Theorem 4: it would be helpful to clarify whether λ_max(Θ̂_n) is of order O(1) with respect to n, that is, does η < 2/(n λ_max(Θ̂_n)) mean that the step size should scale like O(n^{-1}) in the n → ∞ limit?
ICLR
Title Self-supervised Representation Learning with Relative Predictive Coding Abstract This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance. The key to the success of RPC is two-fold. First, RPC introduces the relative parameters to regularize the objective for boundedness and low variance. Second, RPC contains no logarithm and exponential score functions, which are the main cause of training instability in prior contrastive objectives. We empirically verify the effectiveness of RPC on benchmark vision and speech self-supervised learning tasks. Lastly, we relate RPC with mutual information (MI) estimation, showing RPC can be used to estimate MI with low variance 1. 1 INTRODUCTION Unsupervised learning has drawn tremendous attention recently because it can extract rich representations without label supervision. Self-supervised learning, a subset of unsupervised learning, learns representations by allowing the data to provide supervision (Devlin et al., 2018). Among its mainstream strategies, self-supervised contrastive learning has been successful in visual object recognition (He et al., 2020; Tian et al., 2019; Chen et al., 2020c), speech recognition (Oord et al., 2018; Rivière et al., 2020), language modeling (Kong et al., 2019), graph representation learning (Velickovic et al., 2019) and reinforcement learning (Kipf et al., 2019). The idea of self-supervised contrastive learning is to learn latent representations such that related instances (e.g., patches from the same image; defined as positive pairs) will have representations within close distance, while unrelated instances (e.g., patches from two different images; defined as negative pairs) will have distant representations (Arora et al., 2019). Prior work has formulated the contrastive learning objectives as maximizing the divergence between the distribution of related and unrelated instances. In this regard, different divergence measurement often leads to different loss function design. For example, variational mutual information (MI) estimation (Poole et al., 2019) inspires Contrastive Predictive Coding (CPC) (Oord et al., 2018). Note that MI is also the KL-divergence between the distributions of related and unrelated instances (Cover & Thomas, 2012). While the choices of the contrastive learning objectives are abundant (Hjelm et al., 2018; Poole et al., 2019; Ozair et al., 2019), we point out that there are three challenges faced by existing methods. The first challenge is the training stability, where an unstable training process with high variance may be problematic. For example, Hjelm et al. (2018); Tschannen et al. (2019); Tsai et al. (2020b) show that the contrastive objectives with large variance cause numerical issues and have a poor downstream performance with their learned representations. The second challenge is the sensitivity to minibatch size, where the objectives requiring a huge minibatch size may restrict their practical usage. For instance, SimCLRv2 (Chen et al., 2020c) utilizes CPC as its contrastive objective and reaches state-of-the-art performances on multiple self-supervised and semi-supervised benchmarks. Nonetheless, the objective is trained with a minibatch size of 8, 192, and this scale of training requires enormous computational power. 
The third challenge is the downstream task performance, which is the one that we would like to emphasize the most. For this reason, in most cases, CPC 1Project page: https://github.com/martinmamql/relative_predictive_coding is the objective that we would adopt for contrastive representation learning, due to its favorable performance in downstream tasks (Tschannen et al., 2019; Baevski et al., 2020). This paper presents a new contrastive representation learning objective: the Relative Predictive Coding (RPC), which attempts to achieve a good balance among these three challenges: training stability, sensitivity to minibatch size, and downstream task performance. At the core of RPC is the relative parameters, which are used to regularize RPC for its boundedness and low variance. From a modeling perspective, the relative parameters act as a `2 regularization for RPC. From a statistical perspective, the relative parameters prevent RPC from growing to extreme values, as well as upper bound its variance. In addition to the relative parameters, RPC contains no logarithm and exponential, which are the main cause of the training instability for prior contrastive learning objectives (Song & Ermon, 2019). To empirically verify the effectiveness of RPC, we consider benchmark self-supervised representation learning tasks, including visual object classification on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015) and speech recognition on LibriSpeech (Panayotov et al., 2015). Comparing RPC to prior contrastive learning objectives, we observe a lower variance during training, a lower minibatch size sensitivity, and consistent performance improvement. Lastly, we also relate RPC with MI estimation, empirically showing that RPC can estimate MI with low variance. 2 PROPOSED METHOD This paper presents a new contrastive representation learning objective - the Relative Predictive Coding (RPC). At a high level, RPC 1) introduces the relative parameters to regularize the objective for boundedness and low variance; and 2) achieves a good balance among the three challenges in the contrastive representation learning objectives: training stability, sensitivity to minibatch size, and downstream task performance. We begin by describing prior contrastive objectives along with their limitations on the three challenges in Section 2.1. Then, we detail our presented objective and its modeling benefits in Section 2.2. An overview of different contrastive learning objectives is provided in Table 1. We defer all the proofs in Appendix. Notation We use an uppercase letter to denote a random variable (e.g., X), a lower case letter to denote the outcome of this random variable (e.g., x), and a calligraphy letter to denote the sample space of this random variable (e.g., X ). Next, if the samples (x, y) are related (or positively-paired), we refer (x, y) ∼ PXY with PXY being the joint distribution of X × Y . If the samples (x, y) are unrelated (negatively-paired), we refer (x, y) ∼ PXPY with PXPY being the product of marginal distributions overX×Y . Last, we define f ∈ F for F being any class of functions f : X ×Y → R. 2.1 PRELIMINARY Contrastive representation learning encourages the contrastiveness between the positive and the negative pairs of the representations from the related data X and Y . 
Specifically, when sampling a pair of representations (x, y) from their joint distribution ((x, y) ∼ PXY ), this pair is defined as a positive pair; when sampling from the product of marginals ((x, y) ∼ PXPY ), this pair is defined as a negative pair. Then, Tsai et al. (2020b) formalizes this idea such that the contrastiveness of the representations can be measured by the divergence between PXY and PXPY , where higher divergence suggests better contrastiveness. To better understand prior contrastive learning objectives, we categorize them in terms of different divergence measurements between PXY and PXPY , with their detailed objectives presented in Table 1. We instantiate the discussion using Contrastive Predictive Coding (Oord et al., 2018, JCPC), which is a lower bound of DKL(PXY ‖PXPY ) with DKL referring to the KL-divergence: JCPC(X,Y ) := sup f∈F E(x,y1)∼PXY ,{yj}Nj=2∼PY [ log ef(x,y1) 1 N ∑N j=1 e f(x,yj) ] . (1) Then, Oord et al. (2018) presents to maximize JCPC(X,Y ), so that the learned representations X and Y have high contrastiveness. We note that JCPC has been commonly used in many recent self-supervised representation learning frameworks (He et al., 2020; Chen et al., 2020b), where they constrain the function to be f(x, y) = cosine(x, y) with cosine(·) being cosine similarity. Under this function design, maximizing JCPC leads the representations of related pairs to be close and representations of unrelated pairs to be distant. The category of modeling DKL(PXY ‖PXPY ) also includes the Donsker-Varadhan objective (JDV (Donsker & Varadhan, 1975; Belghazi et al., 2018)) and the Nguyen-Wainright-Jordan objective (JNWJ (Nguyen et al., 2010; Belghazi et al., 2018)), where Belghazi et al. (2018); Tsai et al. (2020b) show that JDV(X,Y ) = JNWJ(X,Y ) = DKL(PXY ‖PXPY ). The other divergence measurements considered in prior work are DJS(PXY ‖PXPY ) (with DJS referring to the Jenson-Shannon divergence) and DWass(PXY ‖PXPY ) (with DWass referring to the Wassersteindivergence). The instance of modeling DJS(PXY ‖PXPY ) is the Jensen-Shannon f-GAN objective( JJS (Nowozin et al., 2016; Hjelm et al., 2018) ) , where JJS(X,Y ) = 2 ( DJS(PXY ‖PXPY ) − log 2 ) .2 The instance of modeling DWass(PXY ‖PXPY ) is the Wasserstein Predictive Coding( JWPC (Ozair et al., 2019) ) , where JWPC(X,Y ) modifies JCPC(X,Y ) objective (equation 1) by searching the function from F to FL. FL denotes any class of 1-Lipschitz continuous functions from (X × Y) to R, and thus FL ⊂ F . Ozair et al. (2019) shows that JWPC(X,Y ) is the lower bound of bothDKL(PXY ‖PXPY ) andDWass(PXY ‖PXPY ). See Table 1 for all the equations. To conclude, the contrastive representation learning objectives are unsupervised representation learning methods that maximize the distribution divergence between PXY and PXPY . The learned representations cause high contrastiveness, and recent work (Arora et al., 2019; Tsai et al., 2020a) theoretically show that highly-contrastive representations could improve the performance on downstream tasks. After discussing prior contrastive representation learning objectives, we point out three challenges in their practical deployments: training stability, sensitivity to minibatch training size, and downstream task performance. In particular, the three challenges can hardly be handled well at the same time, where we highlight the conclusions in Table 1. 
Training Stability: The training stability highly relates to the variance of the objectives, where Song & Ermon (2019) shows that JDV and JNWJ exhibit inevitable high variance due to their inclusion of exponential function. As pointed out by Tsai et al. (2020b), JCPC, JWPC, and JJS have better training stability because JCPC and JWPC can be realized as a multi-class classification task and JJS can be realized as a binary classification task. The cross-entropy loss adopted in JCPC, JWPC, and JJS is highly-optimized and stable in existing optimization package (Abadi et al., 2016; Paszke et al., 2019). Sensitivity to minibatch training size: Among all the prior contrastive representation learning methods, JCPC is known to be sensitive to the minibatch training size (Ozair et al., 2019). Taking a closer look at equation 1, JCPC deploys an instance selection such that y1 should be selected from {y1, y2, · · · , yN}, with (x, y1) ∼ PXY , (x, yj>1) ∼ PXPY with N being the minibatch size. Previous work (Poole et al., 2019; Song & Ermon, 2019; Chen et al., 2020b; Caron et al., 2020) showed that a large N results in a more challenging instance selection and forces JCPC to have a better contrastiveness of y1 (related instance for x) against {yj}Nj=2 (unrelated instance for x). JDV, JNWJ, and JJS do not consider 2JJS(X,Y ) achieves its supreme value when f∗(x, y) = log(p(x, y)/p(x)p(y)) (Tsai et al., 2020b). Plugin f∗(x, y) into JJS(X,Y ), we can conclude JJS(X,Y ) = 2(DJS(PXY ‖PXPY )− log 2). the instance selection, and JWPC reduces the minibatch training size sensitivity by enforcing 1- Lipschitz constraint. Downstream Task Performance: The downstream task performance is what we care the most among all the three challenges. JCPC has been the most popular objective as it manifests superior performance over the other alternatives (Tschannen et al., 2019; Tsai et al., 2020b;a). We note that although JWPC shows better performance on Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets, we empirically find it not generalizing well to CIFAR-10/100 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). 2.2 RELATIVE PREDICTIVE CODING In this paper, we present Relative Predictive Coding (RPC), which achieves a good balance among the three challenges mentioned above: JRPC(X,Y ) := sup f∈F EPXY [f(x, y)]−αEPXPY [f(x, y)]− β 2 EPXY [ f2(x, y) ] −γ 2 EPXPY [ f2(x, y) ] , (2) where α > 0, β > 0, γ > 0 are hyper-parameters and we define them as relative parameters. Intuitively, JRPC contains no logarithm or exponential, potentially preventing unstable training due to numerical issues. Now, we discuss the roles of α, β, γ. At a first glance, α acts to discourage the scores of PXY and PXPY from being close, and β/γ acts as a `2 regularization coefficient to stop f from becoming large. For a deeper analysis, the relative parameters act to regularize our objective for boundedness and low variance. To show this claim, we first present the following lemma: Lemma 1 (Optimal Solution for JRPC) Let r(x, y) = p(x,y)p(x)p(y) be the density ratio. JRPC has the optimal solution f∗(x, y) = r(x,y)−αβ r(x,y)+γ := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . Lemma 1 suggests that JRPC achieves its supreme value at the ratio rα,β,γ(x, y) indexed by the relative parameters α, β, γ (i.e., we term rα,β,γ(x, y) as the relative density ratio). We note that rα,β,γ(x, y) is an increasing function w.r.t. r(x, y) and is nicely bounded even when r(x, y) is large. 
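Both properties of the relative density ratio can be checked in a few lines, independently of any training; the α, β, γ values below are illustrative choices. The sketch also inverts the transform of Lemma 1, r = (α + γ r̃)/(1 − β r̃), which is what lets a learned critic be mapped back to a density-ratio (and hence MI) estimate later on.

```python
import numpy as np

# Sanity check of Lemma 1: r_{a,b,g}(r) = (r - alpha) / (beta * r + gamma) stays in
# [-alpha/gamma, 1/beta] even when r = p(x,y)/(p(x)p(y)) blows up, and it is invertible.
alpha, beta, gamma = 1.0, 0.005, 1.0

def rel_ratio(r):
    return (r - alpha) / (beta * r + gamma)

def invert(rel):
    return (alpha + gamma * rel) / (1.0 - beta * rel)

r = np.logspace(-3, 6, 10)                               # density ratios over many orders of magnitude
rel = rel_ratio(r)
print(rel.min() >= -alpha / gamma, rel.max() <= 1.0 / beta)   # True True: the stated bounds hold
print(np.allclose(invert(rel), r))                            # True: r is recoverable from the critic
```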
We will now show that the bounded rα,β,γ suggests the empirical estimation of JRPC has boundeness and low variance. In particular, let {xi, yi}ni=1 be n samples drawn uniformly at random from PXY and {x′j , y′j}mj=1 be m samples drawn uniformly at random from PXPY . Then, we use neural networks to empirically estimate JRPC as Ĵ m,n RPC: Definition 1 (Ĵm,nRPC, empirical estimation of JRPC) We parametrize f via a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where d ∈ N and Θ is compact. Then, Ĵm,nRPC = supfθ∈FΘ 1 n ∑n i=1 fθ(xi, yi)− 1 m ∑m j=1 αfθ(x ′ j , y ′ j)− 1n ∑n i=1 β 2 f 2 θ (xi, yi)− 1m ∑m j=1 γ 2 f 2 θ (x ′ j , y ′ j). Proposition 1 (Boundedness of Ĵm,nRPC, informal) 0 ≤ JRPC ≤ 1 2β + α2 2γ . Then, with probability at least 1− δ, |JRPC − Ĵm,nRPC| = O( √ d+log (1/δ) n′ ), where n ′ = min {n,m}. Proposition 2 (Variance of Ĵm,nRPC, informal) There exist universal constants c1 and c2 that depend only on α, β, γ, such that Var[Ĵm,nRPC] = O ( c1 n + c2 m ) . From the two propositions, whenm and n are large, i.e., the sample sizes are large, Ĵm,nRPC is bounded, and its variance vanishes to 0. First, the boundedness of Ĵm,nRPC suggests Ĵ m,n RPC will not grow to extremely large or small values. Prior contrastive learning objectives with good training stability (e.g., JCPC/JJS/JWPC) also have the boundedness of their objective values. For instance, the empirical estimation of JCPC is less than logN (equation 1) (Poole et al., 2019). Nevertheless, JCPC often performs the best only when minibatch size is large, and empirical performances of JJS and JWPC are not as competitive as JCPC. Second, the upper bound of the variance implies the training of Ĵm,nRPC can be stable, and in practice we observe a much smaller value than the stated upper bound. On the contrary, Song & Ermon (2019) shows that the empirical estimations of JDV and JNWJ exhibit inevitable variances that grow exponentially with the true DKL(PXY ‖PXPY ). Lastly, similar to prior contrastive learning objective that are related to distribution divergence measurement, we associate JRPC with the Chi-square divergence Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)] − 1 (Nielsen & Nock, 2013). The derivations are provided in Appendix. By having P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY , we can rewrite JRPC(X,Y ) as JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)]. Hence, JRPC can be regarded as a generalization of Dχ2 with the relative parameters α, β, γ, where Dχ2 can be recovered from JRPC by specializing α = 0, β = 0 and γ = 1 (e.g., Dχ2 = 2JRPC|α=β=0,γ=1 − 1). Note that JRPC may not be a formal divergence measure with arbitrary α, β, γ. 3 EXPERIMENTS We provide an overview of the experimental section. First, we conduct benchmark self-supervised representation learning tasks spanning visual object classification and speech recognition. This set of experiments are designed to discuss the three challenges of the contrastive representation learning objectives: downstream task performance (Section 3.1), training stability (Section 3.2), and minibatch size sensitivity (Section 3.3). We also provide an ablation study on the choices of the relative parameters in JRPC (Section 3.4). On these experiments we found that JRPC achieves a lower variance during training, a lower batch size insensitivity, and consistent performance improvement. Second, we relate JRPC with mutual information (MI) estimation (Section 3.5). 
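Definition 1 above translates directly into code: given critic scores on positive pairs and on negative pairs, the empirical objective is a difference of first and second sample moments. The sketch below is our reading of Definition 1 (PyTorch, with illustrative names), not the released implementation; the example relative parameters are the CIFAR-10 values reported in the appendix.

```python
import torch

def rpc_objective(pos_scores: torch.Tensor,
                  neg_scores: torch.Tensor,
                  alpha: float, beta: float, gamma: float) -> torch.Tensor:
    """Empirical J_RPC (Definition 1), to be maximized.

    pos_scores: f_theta(x_i, y_i) for n samples drawn from P_XY
    neg_scores: f_theta(x'_j, y'_j) for m samples drawn from P_X P_Y
    """
    return (pos_scores.mean()
            - alpha * neg_scores.mean()
            - 0.5 * beta * (pos_scores ** 2).mean()
            - 0.5 * gamma * (neg_scores ** 2).mean())

# training minimizes the negated objective, e.g.
# loss = -rpc_objective(f(x, y_pos), f(x, y_neg), alpha=1.0, beta=0.005, gamma=1.0)
```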
The connection is that MI is an average statistic of the density ratio, and we have shown that the optimal solution of JRPC is the relative density ratio (see Lemma 1). Thus we could estimate MI using the density ratio transformed from the optimal solution of JRPC. On these two sets of experiments, we fairly compare JRPC with other contrastive learning objectives. Particularly, across different objectives, we fix the network, learning rate, optimizer, and batch size (we use the default configurations suggested by the original implementations from Chen et al. (2020c), Rivière et al. (2020) and Tsai et al. (2020b).) The only difference will be the objective itself. In what follows, we perform the first set of experiments. We defer experimental details in the Appendix. Datasets. For the visual objective classification, we consider CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015). CIFAR-10/-100 and ImageNet contain labeled images only, while STL-10 contains labeled and unlabeled images. For the speech recognition, we consider LibriSpeech-100h (Panayotov et al., 2015) dataset, which contains 100 hours of 16kHz English speech from 251 speakers with 41 types of phonemes. Training and Evaluation Details. For the vision experiments, we follow the setup from SimCLRv2 (Chen et al., 2020c), which considers visual object recognition as its downstream task. For the speech experiments, we follow the setup from prior work (Oord et al., 2018; Rivière et al., 2020), which consider phoneme classification and speaker identification as the downstream tasks. Then, we briefly discuss the training and evaluation details into three modules: 1) related and unrelated data construction, 2) pre-training, and 3) fine-tuning and evaluation. For more details, please refer to Appendix or the original implementations. . Related and Unrelated Data Construction. In the vision experiment, we construct the related images by applying different augmentations on the same image. Hence, when (x, y) ∼ PXY , x and y are the same image with different augmentations. The unrelated images are two randomly selected samples. In the speech experiment, we define the current latent feature (feature at time t) and the future samples (samples at time > t) as related data. In other words, the feature in the latent space should contain information that can be used to infer future time steps. A latent feature and randomly selected samples would be considered as unrelated data. . Pre-training. The pre-training stage refers to the self-supervised training by a contrastive learning objective. Our training objective is defined in Definition 1, where we use neural networks to parametrize the function using the constructed related and unrelated data. Convolutional neural networks are used for vision experiments. Transformers (Vaswani et al., 2017) and LSTMs (Hochreiter & Schmidhuber, 1997) are used for speech experiments. . Fine-tuning and Evaluation. After the pre-training stage, we fix the parameters in the pre-trained networks and add a small fine-tuning network on top of them. Then, we fine-tune this small network with the downstream labels in the data’s training split. For the fine-tuning network, both vision and speech experiments consider multi-layer perceptrons. Last, we evaluate the fine-tuned representations on the data’s test split. We would like to point out that we do not normalize the hidden representations encoded by the pre-training neural network for loss calculation. 
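For the vision experiments, the related/unrelated data construction described above amounts to augmenting each image twice; a minimal sketch of that step is given below. The particular torchvision transforms are chosen for illustration only and are not necessarily the exact augmentation stack used in SimCLRv2.

```python
import torch
from torchvision import transforms

# illustrative augmentation stack; SimCLRv2 uses its own crop / color-distortion recipe
augment = transforms.Compose([
    transforms.RandomResizedCrop(32),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

def make_views(pil_images):
    """Two independent augmentations of each image: rows i of the two stacks
    form a positive (related) pair; any cross-image pairing acts as a negative pair."""
    view1 = torch.stack([augment(img) for img in pil_images])
    view2 = torch.stack([augment(img) for img in pil_images])
    return view1, view2
```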
This hidden normalization technique is widely applied (Tian et al., 2019; Chen et al., 2020b;c) to stabilize training and increase performance for prior objectives, but we find it unnecessary for JRPC. 3.1 DOWNSTREAM TASK PERFORMANCES ON VISION AND SPEECH For the downstream task performance in the vision domain, we test the proposed JRPC and other contrastive learning objectives on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet ILSVRC-2012 (Russakovsky et al., 2015). Here we report the best performance JRPC achieves on each dataset (experimental details are included in A.7). Table 2 shows that the proposed JRPC outperforms other objectives on all datasets. Using JRPC on the largest network (ResNet with depth 152, channel width 2, and selective kernels), the performance jumps from 77.80% with JCPC to 78.40% with JRPC. Regarding speech representation learning, the downstream performances for phoneme and speaker classification are shown in Table 3 (experimental details are deferred to Appendix A.9). Compared to JCPC, JRPC improves the phoneme classification result by 4.8 percent and the speaker classification result by 0.3 percent, moving closer to the fully supervised model. Overall, the proposed JRPC performs better than the other unsupervised learning objectives on both phoneme classification and speaker classification tasks. 3.2 TRAINING STABILITY We provide empirical training stability comparisons of JDV, JNWJ, JCPC, and JRPC by plotting the values of the objectives as the training step increases. We apply the four objectives to the SimCLRv2 framework and train on the CIFAR-10 dataset. All training setups are exactly the same except for the objectives. In our experiments, JDV and JNWJ soon explode to NaN and disrupt training (shown as early stopping in Figure 1a; extremely large values are not plotted due to scale constraints). On the other hand, JRPC and JCPC have low variance, and both enjoy stable training. As a result, downstream performance using the representations learned from the unstable JDV and JNWJ suffers, while the representations learned by JRPC and JCPC work much better. 3.3 MINIBATCH SIZE SENSITIVITY We then analyze the effect of minibatch size on JRPC and JCPC, since JCPC is known to be sensitive to minibatch size (Poole et al., 2019). We train SimCLRv2 (Chen et al., 2020c) on CIFAR-10 and the model from Rivière et al. (2020) on LibriSpeech-100h using JRPC and JCPC with different minibatch sizes. The settings of the relative parameters are the same as in Section 3.2. From Figures 1b and 1c, we observe that both JRPC and JCPC achieve their optimal performance at a large minibatch size. However, when the minibatch size decreases, the performance of JCPC shows higher sensitivity and suffers more when the number of minibatch samples is small. The result suggests that the proposed method may be less sensitive to changes in minibatch size than JCPC under the same training settings. 3.4 EFFECT OF RELATIVE PARAMETERS We study the effect of different combinations of the relative parameters in JRPC by comparing downstream performances on visual object recognition. We train SimCLRv2 on CIFAR-10 with different combinations of α, β, and γ in JRPC and fix all other experimental settings. We choose α ∈ {0, 0.001, 1.0}, β ∈ {0, 0.001, 1.0}, γ ∈ {0, 0.001, 1.0} and report the best performance under each combination of α, β, and γ.
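The ablation just described is a simple grid over the relative parameters. A sketch of the sweep is shown below; the train_and_evaluate stub is a placeholder of our own and would need to be replaced by the actual pre-training and fine-tuning pipeline.

```python
from itertools import product

ALPHAS = [0.0, 0.001, 1.0]
BETAS = [0.0, 0.001, 1.0]
GAMMAS = [0.0, 0.001, 1.0]

def train_and_evaluate(alpha: float, beta: float, gamma: float) -> float:
    """Placeholder: pre-train with J_RPC(alpha, beta, gamma), fine-tune the small
    head, and return downstream test accuracy. Returns 0.0 until implemented."""
    return 0.0

results = {}
for alpha, beta, gamma in product(ALPHAS, BETAS, GAMMAS):
    results[(alpha, beta, gamma)] = train_and_evaluate(alpha, beta, gamma)

best = max(results, key=results.get)
print("best (alpha, beta, gamma):", best, "accuracy:", results[best])
```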
From Figure 2, we first observe that α > 0 yields better downstream performance than α = 0 when β and γ are fixed. This observation is as expected, since α > 0 encourages the representations of related and unrelated samples to be pushed apart. Then, we find that a small but nonzero β (β = 0.001) and a large γ (γ = 1.0) give the best performance compared to other combinations. Since β and γ serve as the coefficients of `2 regularization, the results imply that this regularization is a strong and sensitive factor influencing performance. The results here are not as competitive as those in Table 2 because the CIFAR-10 result reported in Table 2 uses a set of relative parameters (α = 1.0, β = 0.005, γ = 1.0) different from the combinations in this subsection. Also, we use quite different ranges of γ on ImageNet (see A.7 for details). In conclusion, we find empirically that a non-zero α, a small β, and a large γ lead to the best representation for the downstream task on CIFAR-10. 3.5 RELATION TO MUTUAL INFORMATION ESTIMATION The presented approach also closely relates to mutual information estimation. For random variables X and Y with joint distribution PXY and product of marginals PXPY , the mutual information is defined as I(X;Y ) = DKL(PXY ‖PXPY ). Lemma 1 states that, given the optimal solution f∗(x, y) of JRPC, we can recover the density ratio r(x, y) := p(x, y)/p(x)p(y) as r(x, y) = (γ/β + α)/(1 − βf∗(x, y)) − γ/β. We can empirically estimate r̂(x, y) from the estimated f̂(x, y) via this transformation, and use r̂(x, y) to estimate mutual information (Tsai et al., 2020b). Specifically, I(X;Y ) ≈ (1/n) ∑_{i=1}^{n} log r̂(xi, yi), with (xi, yi) drawn from the uniformly sampled empirical distribution of PXY . We follow prior work (Poole et al., 2019; Song & Ermon, 2019; Tsai et al., 2020b) for the experiments. We consider X and Y as two 20-dimensional Gaussians with correlation ρ, and our goal is to estimate the mutual information I(X;Y ). Then, we perform a cubic transformation on y so that y ↦ y^3. The first task is referred to as the Gaussian task and the second as the Cubic task, where both have the ground truth I(X;Y ) = −10 log(1 − ρ^2). The models are trained for 20,000 steps with I(X;Y ) starting at 2 and increasing by 2 every 4,000 steps. Our method is compared with the baseline methods JCPC (Oord et al., 2018), JNWJ (Nguyen et al., 2010), JJS (Nowozin et al., 2016), SMILE (Song & Ermon, 2019), and Difference of Entropies (DoE) (McAllester & Stratos, 2020). All approaches use the same network design, learning rate, optimizer, and minibatch size for a fair comparison. First, we observe that JCPC (Oord et al., 2018) has the smallest variance, while it exhibits a large bias (the estimated mutual information from JCPC is upper-bounded by log(batch size)). Second, JNWJ (Nguyen et al., 2010) and JJSD (Poole et al., 2019) have large variances, especially in the Cubic task. Song & Ermon (2019) pointed out the limitations of JCPC, JNWJ, and JJSD, and developed the SMILE method, which clips the value of the estimated density function to reduce the variance of the estimators. DoE (McAllester & Stratos, 2020) is neither a lower bound nor an upper bound of mutual information, but can achieve accurate estimates when the underlying mutual information is large. JRPC exhibits comparable bias and lower variance compared to the SMILE method, and is more stable than the DoE method.
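The MI estimator described above is easy to state in code: invert Lemma 1 to recover the density ratio from the learned critic and average its logarithm over positive samples. The sketch below assumes a trained critic evaluated on n positive pairs; the function names are illustrative, and the small clamp is only a numerical guard of this sketch, since the paper does not clip the estimated ratio.

```python
import torch

def density_ratio_from_critic(f_vals: torch.Tensor,
                              alpha: float, beta: float, gamma: float) -> torch.Tensor:
    """Invert Lemma 1: r(x, y) = (gamma/beta + alpha) / (1 - beta * f(x, y)) - gamma/beta."""
    return (gamma / beta + alpha) / (1.0 - beta * f_vals) - gamma / beta

def estimate_mi(f_vals_on_positive_pairs: torch.Tensor,
                alpha: float, beta: float, gamma: float) -> torch.Tensor:
    """I(X; Y) ~= mean(log r_hat(x_i, y_i)) over samples (x_i, y_i) drawn from P_XY."""
    r_hat = density_ratio_from_critic(f_vals_on_positive_pairs, alpha, beta, gamma)
    # clamp guards against log(0) from numerical underflow only; no clipping of the estimator itself
    return torch.log(r_hat.clamp_min(1e-12)).mean()
```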
We would like to highlight our method’s low-variance property, where we neither clip the values of the estimated density ratio nor impose an upper bound on our estimated mutual information. 4 RELATED WORK As a subset of unsupervised representation learning, self-supervised representation learning (SSL) adopts self-defined signals as supervision and uses the learned representation for downstream tasks, such as object detection and image captioning (Liu et al., 2020). We categorize SSL work into two groups, according to whether the supervisory signal is a hidden property of the input or a corresponding view of the input. For the first group, for example, Jigsaw puzzle (Noroozi & Favaro, 2016) shuffles image patches and defines the SSL task as predicting the shuffled positions of the patches. Other instances are Predicting Rotations (Gidaris et al., 2018) and Shuffle & Learn (Misra et al., 2016). For the second group, the SSL task aims at modeling the co-occurrence of multiple views of the data, via contrastive or predictive learning objectives (Tsai et al., 2020a). The predictive objectives encourage reconstruction from one view of the data to the other, such as predicting the lower part of an image from its upper part (ImageGPT by Chen et al. (2020a)). Comparing contrastive with predictive learning approaches, Tsai et al. (2020a) points out that the former requires fewer computational resources for good performance but suffers more from over-fitting. Theoretical analysis (Arora et al., 2019; Tsai et al., 2020a; Tosh et al., 2020) suggests that contrastively learned representations can lead to good downstream performance. Beyond the theory, Tian et al. (2020) shows that what matters most for performance is 1) the choice of the contrastive learning objective; and 2) the creation of the positive and negative data pairs in the contrastive objective. Recent work (Khosla et al., 2020) extends the usage of contrastive learning from the self-supervised setting to the supervised setting. The supervised setting defines the positive pairs as data from the same class in the contrastive objective, while the self-supervised setting defines the positive pairs as data with different augmentations. Our work also closely relates to skewed divergence measurement between distributions (Lee, 1999; 2001; Nielsen, 2010; Yamada et al., 2013). Recall that the relative parameters play a crucial role in regularizing our objective for boundedness and low variance. This idea is similar to skewed divergence measurement: when calculating the divergence between distributions P and Q, instead of considering D(P ‖Q), these approaches consider D(P ‖αP + (1 − α)Q), with D representing the divergence and 0 < α < 1. A natural example is that the Jensen-Shannon divergence is a symmetric skewed KL divergence: DJS(P ‖Q) = 0.5 DKL(P ‖ 0.5P + 0.5Q) + 0.5 DKL(Q ‖ 0.5P + 0.5Q). Compared to the non-skewed counterpart, skewed divergences have been shown to yield more robust estimates (Lee, 1999; 2001; Yamada et al., 2013). Different from these works, which focus on estimating the values of distribution divergences, we focus on learning self-supervised representations. 5 CONCLUSION In this work, we present RPC, the Relative Predictive Coding, which achieves a good balance among the three challenges when modeling a contrastive learning objective: training stability, sensitivity to minibatch size, and downstream task performance.
We believe this work brings an appealing option for training self-supervised models and inspires future work to design objectives for balancing the aforementioned three challenges. In the future, we are interested in applying RPC in other application domains and developing more principled approaches for better representation learning. ACKNOWLEDGEMENT This work was supported in part by the NSF IIS1763562, NSF Awards #1750439 #1722822, National Institutes of Health, IARPA D17PC00340, ONR Grant N000141812861, and Facebook PhD Fellowship. We would also like to acknowledge NVIDIA’s GPU support and Cloud TPU support from Google’s TensorFlow Research Cloud (TFRC). A APPENDIX A.1 PROOF OF LEMMA 1 IN THE MAIN TEXT Lemma 2 (Optimal Solution for JRPC, restating Lemma 1 in the main text) Let JRPC(X,Y ) := sup f∈F EPXY [f(x, y)]−αEPXPY [f(x, y)]− β 2 EPXY [ f2(x, y) ] −γ 2 EPXPY [ f2(x, y) ] and r(x, y) = p(x,y)p(x)p(y) be the density ratio. JRPC has the optimal solution f∗(x, y) = r(x, y)− α β r(x, y) + γ := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . Proof: The second-order functional derivative of the objective is −βdPX,Y − γdPXPY , which is always negative. The negative second-order functional derivative implies the objective has a supreme value. Then, take the first-order functional derivative ∂JRPC∂m and set it to zero: dPX,Y − α · dPXPY − β · f(x, y) · dPX,Y − γ · f(x, y) · dPXPY = 0. We then get f∗(x, y) = dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY = p(x, y)− αp(x)p(y) βp(x, y) + γp(x)p(y) = r(x, y)− α βr(x, y) + γ . Since 0 ≤ r(x, y) ≤ ∞, we have −αγ ≤ r(x,y)−α βr(x,y)+γ ≤ 1 β . Hence, ∀β 6= 0, γ 6= 0, f∗(x, y) := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . A.2 RELATION BETWEEN JRPC AND Dχ2 In this subsection, we aim to show the following: 1) Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)] − 1; and 2) JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)] by having P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY . Lemma 3 Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)]− 1 Proof: By definition (Nielsen & Nock, 2013), Dχ2(PXY ‖PXPY ) = ∫ (dPXY )2 dPXPY − 1 = ∫ ( dPXY dPXPY )2 dPXPY − 1 = ∫ ( p(x, y) p(x)p(y) )2 dPXPY − 1 = ∫ r2(x, y)dPXPY − 1 = EPXPY [r2(x, y)]− 1. Lemma 4 Defining P ′ = ββ+γPXY + γ β+γPXPY as a mixture distribution of PXY and PXPY , JRPC(X,Y ) = β+γ 2 EP ′ [r 2 α,β,γ(x, y)]. Proof: Plug in the optimal solution f∗(x, y) = dPX,Y −α·dPXPYβ·dPX,Y +γ·dPXPY (see Lemma 2) into JRPC: JRPC = EPXY [f∗(x, y)]− αEPXPY [f∗(x, y)]− β 2 EPXY [ f∗2(x, y) ] − γ 2 EPXPY [ f∗2(x, y) ] = ∫ f∗(x, y) · ( dPXY − α · dPXPY ) − 1 2 f∗2(x, y) · ( β · dPXY + γ · dPXPY ) = ∫ dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ( dPXY − α · dPXPY ) − 1 2 ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = 1 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = β + γ 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β β + γ · dPXY + γ β + γ · dPXPY ) . Since we define rα,β,γ = dPX,Y −α·dPXPY β·dPX,Y +γ·dPXPY and P ′ = ββ+γPXY + γ β+γPXPY , JRPC = β + γ 2 EP ′ [r2α,β,γ(x, y)]. A.3 PROOF OF PROPOSITION 1 IN THE MAIN TEXT The proof contains two parts: showing 0 ≤ JRPC ≤ 12β + α2 2γ (see Section A.3.1) and Ĵ m,n RPC is a consistent estimator for JRPC (see Section A.3.2). A.3.1 BOUNDNESS OF JRPC Lemma 5 (Boundness of JRPC) 0 ≤ JRPC ≤ 12β + α2 2γ Proof: Lemma 4 suggests JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)] with P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY . Hence, it is obvious JRPC(X,Y ) ≥ 0. 
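Because the identities established in Lemmas 2 to 4 above are purely algebraic, they can be checked numerically on a toy discrete joint distribution. The snippet below is our own sanity check, not part of the paper: it plugs the closed-form optimum f∗ into JRPC, compares the value against the mixture form of Lemma 4, and recovers the χ²-divergence relation for α = β = 0, γ = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy discrete joint distribution over a 5 x 5 alphabet
p_xy = rng.random((5, 5)); p_xy /= p_xy.sum()
p_x = p_xy.sum(axis=1, keepdims=True)
p_y = p_xy.sum(axis=0, keepdims=True)
p_prod = p_x * p_y                          # product of marginals P_X P_Y
r = p_xy / p_prod                           # density ratio r(x, y)

def j_rpc_at(f, alpha, beta, gamma):
    """J_RPC evaluated at a fixed critic f (a 5 x 5 table of scores)."""
    return ((p_xy * f).sum() - alpha * (p_prod * f).sum()
            - 0.5 * beta * (p_xy * f**2).sum() - 0.5 * gamma * (p_prod * f**2).sum())

alpha, beta, gamma = 1.0, 0.005, 1.0
f_star = (r - alpha) / (beta * r + gamma)   # optimal critic from Lemma 2
# Lemma 4: J_RPC = (beta + gamma)/2 * E_{P'}[f_star^2] with P' the (beta, gamma)-mixture
p_mix = (beta * p_xy + gamma * p_prod) / (beta + gamma)
assert np.isclose(j_rpc_at(f_star, alpha, beta, gamma),
                  0.5 * (beta + gamma) * (p_mix * f_star**2).sum())

# special case alpha = beta = 0, gamma = 1: D_chi2 = 2 * J_RPC - 1 (Lemma 3)
chi2 = (p_prod * r**2).sum() - 1.0
assert np.isclose(chi2, 2.0 * j_rpc_at(r, 0.0, 0.0, 1.0) - 1.0)
print("identities verified")
```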
We leverage the intermediate results in the proof of Lemma 4: JRPC(X,Y ) = 1 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = 1 2 ∫ dPX,Y ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) − α 2 ∫ dPXPY ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) = 1 2 EPXY [rα,β,γ(x, y)]− α 2 EPXPY [rα,β,γ(x, y)]. Since −αγ ≤ rα,β,γ ≤ 1 β , JRPC(X,Y ) ≤ 1 2β + α2 2γ . A.3.2 CONSISTENCY We first recall the definition of the estimation of JRPC: Definition 2 (Ĵm,nRPC, empirical estimation of JRPC, restating Definition 1 in the main text) We parametrize f via a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where d ∈ N and Θ is compact. Let {xi, yi}ni=1 be n samples drawn uniformly at random from PXY and {x′j , y′j}mj=1 be m samples drawn uniformly at random from PXPY . Then, Ĵm,nRPC = sup fθ∈FΘ 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j). Our goal is to show that Ĵm,nRPC is a consistent estimator for JRPC. We begin with the following definition: Ĵm,nRPC,θ := 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j) (3) and E [ ĴRPC,θ ] := EPXY [fθ(x, y)]−αEPXPY [fθ(x, y)]− β 2 EPXY [f2θ (x, y)]− γ 2 EPXPY [f2θ (x, y)]. (4) Then, we follow the steps: • The first part is about estimation. We show that, with high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ. • The second part is about approximation. We will apply the universal approximation lemma of neural networks (Hornik et al., 1989) to show that there exists a network θ∗ such that E [ ĴRPC,θ∗ ] is close to JRPC. Part I - Estimation: With high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ. Throughout the analysis on the uniform convergence, we need the assumptions on the boundness and smoothness of the function fθ. Since we show the optimal function f is bounded in JRPC, we can use the same bounded values for fθ without losing too much precision. The smoothness of the function suggests that the output of the network should only change slightly when only slightly perturbing the parameters. Specifically, the two assumptions are as follows: Assumption 1 (boundness of fθ) There exist universal constants such that ∀fθ ∈ FΘ, CL ≤ fθ ≤ CU . For notations simplicity, we let M = CU − CL be the range of fθ and U = max {|CU |, |CL|} be the maximal absolute value of fθ. In the paper, we can choose to constrain that CL = −αγ and CU = 1 β since the optimal function f ∗ has −αγ ≤ f ∗ ≤ 1β . Assumption 2 (smoothness of fθ) There exists constant ρ > 0 such that ∀(x, y) ∈ (X × Y) and θ1, θ2 ∈ Θ, |fθ1(x, y)− fθ2(x, y)| ≤ ρ|θ1 − θ2|. Now, we can bound the rate of uniform convergence of a function class in terms of covering number (Bartlett, 1998): Lemma 6 (Estimation) Let > 0 and N (Θ, ) be the covering number of Θ with radius . Then, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) . Proof: For notation simplicity, we define the operators • P (f) = EPXY [f(x, y)] and Pn(f) = 1n ∑n i=1 f(xi, yi) • Q(f) = EPXPY [f(x, y)] and Qm(f) = 1m ∑m j=1 f(x ′ j , y ′ j) Hence,∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ = ∣∣Pn(fθ)− P (fθ)− αQm(fθ) + αQ(fθ)− βPn(f2θ ) + βP (f2θ )− γQm(f2θ ) + γQ(f2θ )∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β ∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣ Let ′ = 4ρ ( 1+α+2(β+γ)U ) and T := N (Θ, ′). 
LetC = {fθ1 , fθ2 , · · · , fθT }with {θ1, θ2, · · · , θT } be such that B∞(θ1, ′), · · · , B∞(θT , ′) are ′ cover. Hence, for any fθ ∈ FΘ, there is an fθk ∈ C such that ‖θ − θk‖∞ ≤ ′. Then, for any fθk ∈ C:∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β ∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣ ≤ |Pn(fθk)− P (fθk)|+ |Pn(fθ)− Pn(fθk)|+ |P (fθ)− P (fθk)| + α ( |Qm(fθk)−Q(fθk)|+ |Qm(fθ)−Qm(fθk)|+ |Q(fθ)−Q(fθk)| ) + β ( ∣∣Pn(f2θk)− P (f2θk)∣∣+ ∣∣Pn(f2θ )− Pn(f2θk)∣∣+ ∣∣P (f2θ )− P (f2θk)∣∣ ) + γ ( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ ∣∣Qm(f2θ )−Qm(f2θk)∣∣+ ∣∣Q(f2θ )−Q(f2θk)∣∣ ) ≤ |Pn(fθk)− P (fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖ + α ( |Qm(fθk)−Q(fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖ ) + β ( ∣∣Pn(f2θk)− P (f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) + γ ( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) = |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ + 2ρ ( 1 + α+ 2(β + γ)U ) ‖θ − θk‖ ≤ |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 , where • |Pn(fθ)− Pn(fθk)| ≤ ρ‖θ − θk‖ due to Assumption 2, and the result also applies for |P (fθ)− P (fθk)|, |Qm(fθ)−Qm(fθk)|, and |Q(fθ)−Q(fθk)|. • ∣∣Pn(f2θ )− Pn(f2θk)∣∣ ≤ 2‖fθ‖∞ρ‖θ−θk‖ ≤ 2ρU‖θ−θk‖ due to Assumptions 1 and 2. The result also applies for ∣∣P (f2θ )− P (f2θk)∣∣, ∣∣Qm(f2θ )−Qm(f2θk)∣∣, and ∣∣Q(f2θ )−Q(f2θk)∣∣. Hence, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤Pr ( max fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 ≥ ) = Pr ( max fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2 ) ≤ T∑ k=1 Pr ( |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2) ≤ T∑ k=1 Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) + Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) + Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8)+ Pr(γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) . With Hoeffding’s inequality, • Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) ≤ 2exp ( − n 2 32M2 ) • Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) ≤ 2exp ( − m 2 32M2α2 ) • Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8) ≤ 2exp(− n 232U2β2) • Pr ( γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) ≤ 2exp(− m 232U2γ2) To conclude, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) . Part II - Approximation: Neural Network Universal Approximation. We leverage the universal function approximation lemma of neural network Lemma 7 (Approximation (Hornik et al., 1989)) Let > 0. There exists d ∈ N and a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where Θ is compact, such that inf fθ∈FΘ ∣∣∣E[ĴRPC,θ]− JRPC∣∣∣ ≤ . Part III - Bringing everything together. Now, we are ready to bring the estimation and approximation together to show that there exists a neural network θ∗ such that, with high probability, Ĵm,nRPC,θ can approximate JRPC with n′ = min {n,m} at a rate of O(1/ √ n′): Proposition 3 With probability at least 1 − δ, ∃θ∗ ∈ Θ, |JRPC − Ĵm,nRPC,θ| = O( √ d+log (1/δ) n′ ), where n′ = min {n,m}. Proof: The proof follows by combining Lemma 6 and 7. First, Lemma 7 suggests, ∃θ∗ ∈ Θ,∣∣∣E[ĴRPC,θ∗]− JRPC∣∣∣ ≤ 2 . Next, we perform analysis on the estimation error, aiming to find n,m and the corresponding probability, such that ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ∗]∣∣∣ ≤ 2 . 
Applying Lemma 6 with the covering number of the neural network: ( N (Θ, ) = O ( exp ( d log (1/ ) )) (Anthony & Bartlett, 2009) ) and let n′ = min{n,m}: Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ 2 ) ≤2N (Θ, 8ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 128M2 ) + exp ( − m 2 128M2α2 ) + exp ( − n 2 128U2β2 ) + exp ( − m 2 128U2γ2 )) =O ( exp ( d log (1/ )− n′ 2 )) , where the big-O notation absorbs all the constants that do not require in the following derivation. Since we want to bound the probability with 1− δ, we solve the such that exp ( d log (1/ )− n′ 2 ) ≤ δ. With log (x) ≤ x− 1, n′ 2 + d( − 1) ≥ n′ 2 + dlog ≥ log (1/δ), where this inequality holds when = O (√ d+ log (1/δ) n′ ) . A.4 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM AN ASYMPTOTIC VIEWPOINT Here, we provide the variance analysis on Ĵm,nRPC via an asymptotic viewpoint. First, assuming the network is correctly specified, and hence there exists a network parameter θ∗ satisfying f∗(x, y) = fθ∗(x, y) = rα,β,γ(x, y). Then we recall that Ĵ m,n RPC is a consistent estimator of J RPC (see Proposition 3), and under regular conditions, the estimated network parameter θ̂ in Ĵm,nRPC satisfying the asymptotic normality in the large sample limit (see Theorem 5.23 in (Van der Vaart, 2000)). We recall the definition of Ĵm,nRPC,θ in equation 3 and let n ′ = min{n,m}, the asymptotic expansion of Ĵm,nRPC has Ĵm,nRPC,θ∗ = Ĵ m,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + o(‖θ∗ − θ̂‖) = Ĵm,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + op( 1√ n′ ) = Ĵm,n RPC,θ̂ + op( 1√ n′ ), (5) where ˙̂Jm,n RPC,θ̂ = 0 since θ̂ is the estimation from Ĵm,nRPC = sup fθ∈FΘ Ĵm,nRPC,θ. Next, we recall the definition in equation 4: E[ĴRPC,θ̂] = EPXY [fθ̂(x, y)]− αEPXPY [fθ̂(x, y)]− β 2 EPXY [f2θ̂ (x, y)]− γ 2 EPXPY [f2θ̂ (x, y)]. Likewise, the asymptotic expansion of E[ĴRPC,θ] has E[ĴRPC,θ̂] = E[ĴRPC,θ∗ ] + E[ ˙̂ JRPC,θ∗ ](θ̂ − θ∗) + o(‖θ̂ − θ∗‖) = E[ĴRPC,θ∗ ] + E[ ˙̂JRPC,θ∗ ](θ̂ − θ∗) + op( 1√ n′ ) = E[ĴRPC,θ∗ ] + op( 1√ n′ ), (6) where E[ ˙̂JRPC,θ∗ ] = 0 since E[ĴRPC,θ∗ ] = JRPC and θ∗ satisfying f∗(x, y) = fθ∗(x, y). Combining equations 5 and 6: Ĵm,n RPC,θ̂ − E[ĴRPC,θ̂] =Ĵ m,n RPC,θ∗ − JRPC + op( 1√ n′ ) = 1 n n∑ i=1 f∗θ (xi, yi)− α 1 m m∑ j=1 f∗θ (x ′ j , y ′ j)− β 2 1 n n∑ i=1 f2θ∗(xi, yi)− γ 2 1 m m∑ j=1 f2θ∗(x ′ j , y ′ j) − EPXY [f∗(x, y)] + αEPXPY [f∗(x, y)] + β 2 EPXY [ f∗2(x, y) ] + γ 2 EPXPY [ f∗2(x, y) ] + op( 1√ n′ ) = 1 n n∑ i=1 rα,β,γ(xi, yi)− α 1 m m∑ j=1 rα,β,γ(x ′ j , y ′ j)− β 2 1 n n∑ i=1 r2α,β,γ(xi, yi)− γ 2 1 m m∑ j=1 r2α,β,γ(x ′ j , y ′ j) − EPXY [rα,β,γ(x, y)] + αEPXPY [rα,β,γ(x, y)] + β 2 EPXY [ r2α,β,γ(x, y) ] + γ 2 EPXPY [ r2α,β,γ(x, y) ] + op( 1√ n′ ) = 1√ n · 1√ n n∑ i=1 ( rα,β,γ(xi, yi)− β 2 r2α,β,γ(xi, yi)− EPXY [ rα,β,γ(x, y)− β 2 r2α,β,γ(x, y) ]) − 1√ m · 1√ m m∑ j=1 ( αrα,β,γ(x ′ j , y ′ j) + γ 2 r2α,β,γ(x ′ j , y ′ j)− EPXPY [ αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y) ]) + op( 1√ n′ ). Therefore, the asymptotic Variance of Ĵm,nRPC is Var[Ĵm,nRPC] = 1 n VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] + 1 m VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] + o( 1 n′ ). First, we look at VarPXY [rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y)]. Since β > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − 2αγ+βα 2 2γ2 ≤ rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y) ≤ 12β . Hence, VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] ≤ max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} . Next, we look at VarPXPY [αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y)]. Since α ≥ 0, γ > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − α2 2γ ≤ αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y) ≤ 2αβ+γ 2β2 . 
Hence, VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] ≤ max {(α2 2γ )2 , (2αβ + γ 2β2 )2} . Combining everything together, we restate the Proposition 2 in the main text: Proposition 4 (Asymptotic Variance of Ĵm,nRPC) Var[Ĵm,nRPC] = 1 n VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] + 1 m VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] + o( 1 n′ ) ≤ 1 n max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} + 1 m max {(α2 2γ )2 , (2αβ + γ 2β2 )2} + o( 1 n′ ) A.5 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM BOUNDNESS OF fθ As discussed in Assumption 1, for the estimation Ĵm,nRPC, we can bound the function fθ in FΘ within [−αγ , 1 β ] without losing precision. Then, re-arranging Ĵ m,n RPC: sup fθ∈FΘ 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j) sup fθ∈FΘ 1 n n∑ i=1 ( fθ(xi, yi)− β 2 f2θ (xi, yi) ) + 1 m n∑ j=m ( αfθ(x ′ j , y ′ j) + γ 2 f2θ (x ′ j , y ′ j) ) Then, since −αγ ≤ fθ(·, ·) ≤ 1 β , basic calculations give us −2αγ + βα 2 2γ2 ≤ fθ(xi, yi)− β 2 f2θ (xi, yi) ≤ 1 2β and −α 2 2γ ≤ αfθ(x′j , y′j)+ γ 2 f2θ (x ′ j , y ′ j) ≤ 2αβ + γ 2β2 . The resulting variances have Var[fθ(xi, yi)− β 2 f2θ (xi, yi)] ≤ max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} and Var[αfθ(x ′ j , y ′ j) + γ 2 f2θ (x ′ j , y ′ j)] ≤ max {(α2 2γ )2 , (2αβ + γ 2β2 )2} . Taking the mean of m,n independent random variables gives the result: Proposition 5 (Variance of Ĵm,nRPC) Var[Ĵm,nRPC] ≤ 1 n max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} + 1 m max {(α2 2γ )2 , (2αβ + γ 2β2 )2} . A.6 IMPLEMENTATION OF EXPERIMENTS For visual representation learning, we follow the implementation in https://github.com/ google-research/simclr. For speech representation learning, we follow the implementation in https://github.com/facebookresearch/CPC_audio. For MI estimation, we follow the implementation in https://github.com/yaohungt/Pointwise_ Dependency_Neural_Estimation/tree/master/MI_Est_and_CrossModal.. A.7 RELATIVE PREDICTIVE CODING ON VISION The whole pipeline of pretraining contains the following steps: First, a stochastic data augmentation will transform one image sample xk to two different but correlated augmented views, x′2k−1 and x′2k. Then a base encoder f(·) implemented using ResNet (He et al., 2016) will extract representations from augmented views, creating representations h2k−1 and h2k. Later a small neural network g(·) called projection head will map h2k−1 and h2k to z2k−1 and z2k in a different latent space. For each minibatch of N samples, there will be 2N views generated. For each image xk there will be one positive pair x′2k−1 and x ′ 2k and 2(N − 1) negative samples. The RPC loss between a pair of positive views, x′i and x ′ j (augmented from the same image) , can be calculated by the substitution fθ(x ′ i,x ′ j) = (zi · zj)/τ = si,j (τ is a hyperparameter) to the definition of RPC: `RPCi,j = −(si,j − α 2(N − 1) 2N∑ k=1 1[k 6=i]si,k − β 2 s2i,j − γ 2 · 2(N − 1) 2N∑ k=1 1[k6=i]s 2 i,k) (7) For losses other than RPC, a hidden normalization of si,j is often required by replacing zi · zj with (zi ·zj)/|zi||zj |. CPC and WPC adopt this, while other objectives needs it to help stabilize training variance. RPC does not need this normalization. A.8 CIFAR-10/-100 AND IMAGENET EXPERIMENTS DETAILS ImageNet Following the settings in (Chen et al., 2020b;c), we train the model on Cloud TPU with 128 cores, with a batch size of 4, 096 and global batch normalization 3 (Ioffe & Szegedy, 2015). 
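Returning to equation 7 above, a minimal PyTorch sketch of the minibatch RPC loss in the SimCLR setting is given below. It reflects our reading of equation 7 rather than the released code: we treat the 2(N − 1) views of other images as the negatives for each anchor (matching the normalization in the equation), use unnormalized dot-product scores as stated for JRPC, and default the temperature to the CIFAR-10 value τ = 128 reported in A.8.

```python
import torch

def rpc_simclr_loss(z: torch.Tensor, alpha: float, beta: float, gamma: float,
                    temperature: float = 128.0) -> torch.Tensor:
    """Minibatch RPC loss in the SimCLR setting (a sketch of equation 7).

    z: (2N, d) projections; rows 2k and 2k+1 are the two augmented views of image k.
    Scores s_ij = (z_i . z_j) / temperature, with no hidden (L2) normalization.
    """
    two_n = z.size(0)
    idx = torch.arange(two_n, device=z.device)
    s = z @ z.t() / temperature                      # (2N, 2N) score matrix
    pos_idx = idx ^ 1                                # positive partner: (0,1), (2,3), ...
    s_pos = s[idx, pos_idx]                          # s_{i,j} for each positive pair
    # mask out the anchor itself and its positive; the rest are the 2(N-1) negatives
    neg_mask = torch.ones_like(s, dtype=torch.bool)
    neg_mask[idx, idx] = False
    neg_mask[idx, pos_idx] = False
    n_neg = two_n - 2                                # 2(N - 1)
    neg_mean = (s * neg_mask).sum(dim=1) / n_neg
    neg_sq_mean = (s.pow(2) * neg_mask).sum(dim=1) / n_neg
    loss_i = -(s_pos - alpha * neg_mean
               - 0.5 * beta * s_pos.pow(2) - 0.5 * gamma * neg_sq_mean)
    return loss_i.mean()
```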
Here we refer to the term batch size as the number of images (or utterances in the speech experiments) we use per GPU, while the term minibatch size refers to the number of negative samples used to calculate the objective, such as CPC or our proposed RPC. The largest model we train is a 152-layer ResNet with selective kernels (SK) (Li et al., 2019) and 2× wider channels. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate linearly increases for the first 20 epochs, reaching a maximum of 6.4, then decayed with cosine decay schedule. The weight decay is 10−4. A MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer, and train the model for only 100 epochs rather than 800 epochs due to computational constraints. These two options slightly reduce CPC’s performance benchmark for about 2% with the exact same setting. The unsupervised pre-training is followed by a supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use learning rates 0.16 and 0.064 for standard 50-layer ResNet and larger 152-layer ResNet respectively, and weight decay and learning rate warmup are removed. Different from Chen et al. (2020c), we use a batch size of 4, 096, and we do not use global batch normalization for fine-tuning. For JRPC we disable hidden normalization and use a temperature τ = 32. For all other objectives, we use hidden normalization and τ = 0.1 following previous work (Chen et al., 2020c). For relative parameters, we use α = 0.3, β = 0.001, γ = 0.1 and α = 0.3, β = 0.001, γ = 0.005 for ResNet-50 and ResNet-152 respectively. CIFAR-10/-100 Following the settings in (Chen et al., 2020b), we train the model on a single GPU, with a batch size of 512 and global batch normalization (Ioffe & Szegedy, 2015). We use ResNet (He et al., 2016) of depth 18 and depth 50, and does not use Selective Kernel (Li et al., 2019) or a multiplied width size. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate linearly increases for the first 20 epochs, reaching a maximum of 6.4, then decayed with cosine decay schedule. The weight decay is 10−4. A MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer. We train the model for 1000 epochs. The unsupervised pre-training is followed by a supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use learning rates 0.16 for standard 50-layer ResNet , and weight decay and learning rate warmup are removed. For JRPC we disable hidden normalization and use a temperature τ = 128. For all other objectives, we use hidden normalization and τ = 0.5 following previous work (Chen et al., 2020c). For relative parameters, we use α = 1.0, β = 0.005, and γ = 1.0. STL-10 We also perform the pre-training and fine-tuning on STL-10 (Coates et al., 2011) using the model proposed in Chuang et al. (2020). Chuang et al. (2020) proposed to indirectly approximate the distribution of negative samples so that the objective is debiased. However, their implementation of contrastive learning is consistent with Chen et al. (2020b). We use a ResNet with depth 50 as an encoder for pre-training, with Adam optimizer, learning rate 0.001 and weight decay 10−6. 
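The learning-rate schedule described above (linear warmup to the peak rate over the first 20 epochs, then cosine decay) can be written as a small helper. The sketch below is a generic implementation of that schedule under the ImageNet settings quoted in the text, not the authors' exact code.

```python
import math

def learning_rate(epoch: float, total_epochs: int = 100,
                  warmup_epochs: int = 20, peak_lr: float = 6.4) -> float:
    """Linear warmup to peak_lr over warmup_epochs, then cosine decay to zero."""
    if epoch < warmup_epochs:
        return peak_lr * epoch / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# e.g. learning_rate(0) == 0.0, learning_rate(20) == 6.4, learning_rate(100) ~= 0.0
```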
The temperature τ is set to 0.5 for all objectives other than JRPC, which disables hidden normalization and use τ = 128. The downstream task performance increases from 83.4% of JCPC to 84.1% of JRPC. Confidence Interval We also provide the confidence interval of JRPC and JCPC on CIFAR-10, CIFAR-100 and ImageNet, using ResNet-18, ResNet-18 and ResNet-50 respectively (95% confi- 3For WPC (Ozair et al., 2019), the global batch normalization during pretraining is disabled since we enforce 1-Lipschitz by gradient penalty (Gulrajani et al., 2017). dence level is chosen) in Table 4. Both CPC and RPC use the same experimental settings throughout this paper. Here we use the relative parameters (α = 1.0, β = 0.005, γ = 1.0) in JRPC which gives the best performance on CIFAR-10. The confidence intervals of CPC do not overlap with the confidence intervals of RPC, which means the difference of the downstream task performance between RPC and CPC is statistically significant. A.9 RELATIVE PREDICTIVE CODING ON SPEECH For speech representation learning, we adopt the general architecture from Oord et al. (2018). Given an input signal x1:T with T time steps, we first pass it through an encoder φθ parametrized by θ to produce a sequence of hidden representations {h1:T } where ht = φθ(xt). After that, we obtain the contextual representation ct at time step t with a sequential model ψρ parametrized by ρ: ct = ψρ(h1, . . . ,ht), where ct contains context information before time step t. For unsupervised pre-training, we use a multi-layer convolutional network as the encoder φθ, and an LSTM with hidden dimension 256 as the sequential model ψρ. Here, the contrastiveness is between the positive pair (ht+k, ct) where k is the number of time steps ahead, and the negative pairs (hi, ct), where hi is randomly sampled fromN , a batch of hidden representation of signals assumed to be unrelated to ct. The scoring function f based on Equation 2 at step t and look-ahead k will be fk = fk(h, ct) = exp((h)>Wkct), where Wk is a learnable linear transformation defined separately for each k ∈ {1, ...,K} and K is predetermined as 12 time steps. The loss in Equation 2 will then be formulated as: `RPCt,k = −(fk(ht+k, ct)− α |N | ∑
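The step-wise scoring function f_k(h, c_t) = exp(h^T W_k c_t) described above, with a separate learnable map W_k for each of the K = 12 look-ahead steps, can be sketched as a small module. The dimensions and class name below are illustrative assumptions (the LSTM hidden size of 256 is taken from the text), and this is not the released implementation.

```python
import torch
import torch.nn as nn

class StepwiseScorer(nn.Module):
    """Scores f_k(h, c_t) = exp(h^T W_k c_t) for look-ahead steps k = 1..K,
    with a separate linear map W_k per step."""

    def __init__(self, hidden_dim: int = 256, context_dim: int = 256, num_steps: int = 12):
        super().__init__()
        self.proj = nn.ModuleList(
            [nn.Linear(context_dim, hidden_dim, bias=False) for _ in range(num_steps)]
        )

    def forward(self, h_future: torch.Tensor, c_t: torch.Tensor) -> torch.Tensor:
        """h_future: (B, K, hidden_dim) encoder states at t+1..t+K; c_t: (B, context_dim)."""
        scores = [
            torch.exp((h_future[:, k] * w(c_t)).sum(dim=-1))   # exp(h_{t+k}^T W_k c_t)
            for k, w in enumerate(self.proj)
        ]
        return torch.stack(scores, dim=1)                       # (B, K)
```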
1. What is the focus of the paper, particularly in terms of the proposed objective for self-supervised contrastive learning?
2. What are the advantages of the proposed method compared to other objectives for contrastive learning?
3. What are the concerns regarding the introduction of new hyperparameters in the proposed method?
4. What information is missing in the experimental results presented in the paper?
5. How does the reviewer assess the comparison of results among different methods in the paper?
Review
Review This paper proposes a new objective for self-supervised contrastive learning. In the general framework proposed by Tsai et al. (2020b), the proposed method boils down to using a divergence related to the χ²-divergence. Compared to other objectives for contrastive learning, the authors illustrate the advantages of the proposed one in training stability (or ease of training), sensitivity to batch size, and downstream task performance. However, introducing three new hyperparameters is a cause for concern, since they make it more difficult to select optimal hyperparameters. Also, some important details of the experiments are missing. For example, how many runs were used to obtain the results shown in Tables 2 & 3? What is the confidence interval on the results? Was any test performed to establish statistical significance? What are the settings for supervised training? When the authors compare the results among different methods, did they select the optimal hyperparameters (e.g., learning rate) separately for each method?
ICLR
Title Self-supervised Representation Learning with Relative Predictive Coding Abstract This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance. The key to the success of RPC is two-fold. First, RPC introduces the relative parameters to regularize the objective for boundedness and low variance. Second, RPC contains no logarithm and exponential score functions, which are the main cause of training instability in prior contrastive objectives. We empirically verify the effectiveness of RPC on benchmark vision and speech self-supervised learning tasks. Lastly, we relate RPC with mutual information (MI) estimation, showing RPC can be used to estimate MI with low variance 1. 1 INTRODUCTION Unsupervised learning has drawn tremendous attention recently because it can extract rich representations without label supervision. Self-supervised learning, a subset of unsupervised learning, learns representations by allowing the data to provide supervision (Devlin et al., 2018). Among its mainstream strategies, self-supervised contrastive learning has been successful in visual object recognition (He et al., 2020; Tian et al., 2019; Chen et al., 2020c), speech recognition (Oord et al., 2018; Rivière et al., 2020), language modeling (Kong et al., 2019), graph representation learning (Velickovic et al., 2019) and reinforcement learning (Kipf et al., 2019). The idea of self-supervised contrastive learning is to learn latent representations such that related instances (e.g., patches from the same image; defined as positive pairs) will have representations within close distance, while unrelated instances (e.g., patches from two different images; defined as negative pairs) will have distant representations (Arora et al., 2019). Prior work has formulated the contrastive learning objectives as maximizing the divergence between the distribution of related and unrelated instances. In this regard, different divergence measurement often leads to different loss function design. For example, variational mutual information (MI) estimation (Poole et al., 2019) inspires Contrastive Predictive Coding (CPC) (Oord et al., 2018). Note that MI is also the KL-divergence between the distributions of related and unrelated instances (Cover & Thomas, 2012). While the choices of the contrastive learning objectives are abundant (Hjelm et al., 2018; Poole et al., 2019; Ozair et al., 2019), we point out that there are three challenges faced by existing methods. The first challenge is the training stability, where an unstable training process with high variance may be problematic. For example, Hjelm et al. (2018); Tschannen et al. (2019); Tsai et al. (2020b) show that the contrastive objectives with large variance cause numerical issues and have a poor downstream performance with their learned representations. The second challenge is the sensitivity to minibatch size, where the objectives requiring a huge minibatch size may restrict their practical usage. For instance, SimCLRv2 (Chen et al., 2020c) utilizes CPC as its contrastive objective and reaches state-of-the-art performances on multiple self-supervised and semi-supervised benchmarks. Nonetheless, the objective is trained with a minibatch size of 8, 192, and this scale of training requires enormous computational power. 
The third challenge is the downstream task performance, which is the one that we would like to emphasize the most. For this reason, in most cases, CPC 1Project page: https://github.com/martinmamql/relative_predictive_coding is the objective that we would adopt for contrastive representation learning, due to its favorable performance in downstream tasks (Tschannen et al., 2019; Baevski et al., 2020). This paper presents a new contrastive representation learning objective: the Relative Predictive Coding (RPC), which attempts to achieve a good balance among these three challenges: training stability, sensitivity to minibatch size, and downstream task performance. At the core of RPC is the relative parameters, which are used to regularize RPC for its boundedness and low variance. From a modeling perspective, the relative parameters act as a `2 regularization for RPC. From a statistical perspective, the relative parameters prevent RPC from growing to extreme values, as well as upper bound its variance. In addition to the relative parameters, RPC contains no logarithm and exponential, which are the main cause of the training instability for prior contrastive learning objectives (Song & Ermon, 2019). To empirically verify the effectiveness of RPC, we consider benchmark self-supervised representation learning tasks, including visual object classification on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015) and speech recognition on LibriSpeech (Panayotov et al., 2015). Comparing RPC to prior contrastive learning objectives, we observe a lower variance during training, a lower minibatch size sensitivity, and consistent performance improvement. Lastly, we also relate RPC with MI estimation, empirically showing that RPC can estimate MI with low variance. 2 PROPOSED METHOD This paper presents a new contrastive representation learning objective - the Relative Predictive Coding (RPC). At a high level, RPC 1) introduces the relative parameters to regularize the objective for boundedness and low variance; and 2) achieves a good balance among the three challenges in the contrastive representation learning objectives: training stability, sensitivity to minibatch size, and downstream task performance. We begin by describing prior contrastive objectives along with their limitations on the three challenges in Section 2.1. Then, we detail our presented objective and its modeling benefits in Section 2.2. An overview of different contrastive learning objectives is provided in Table 1. We defer all the proofs in Appendix. Notation We use an uppercase letter to denote a random variable (e.g., X), a lower case letter to denote the outcome of this random variable (e.g., x), and a calligraphy letter to denote the sample space of this random variable (e.g., X ). Next, if the samples (x, y) are related (or positively-paired), we refer (x, y) ∼ PXY with PXY being the joint distribution of X × Y . If the samples (x, y) are unrelated (negatively-paired), we refer (x, y) ∼ PXPY with PXPY being the product of marginal distributions overX×Y . Last, we define f ∈ F for F being any class of functions f : X ×Y → R. 2.1 PRELIMINARY Contrastive representation learning encourages the contrastiveness between the positive and the negative pairs of the representations from the related data X and Y . 
Specifically, when sampling a pair of representations (x, y) from their joint distribution ((x, y) ∼ PXY ), this pair is defined as a positive pair; when sampling from the product of marginals ((x, y) ∼ PXPY ), this pair is defined as a negative pair. Then, Tsai et al. (2020b) formalizes this idea such that the contrastiveness of the representations can be measured by the divergence between PXY and PXPY , where higher divergence suggests better contrastiveness. To better understand prior contrastive learning objectives, we categorize them in terms of different divergence measurements between PXY and PXPY , with their detailed objectives presented in Table 1. We instantiate the discussion using Contrastive Predictive Coding (Oord et al., 2018, JCPC), which is a lower bound of DKL(PXY ‖PXPY ) with DKL referring to the KL-divergence: JCPC(X,Y ) := sup f∈F E(x,y1)∼PXY ,{yj}Nj=2∼PY [ log ef(x,y1) 1 N ∑N j=1 e f(x,yj) ] . (1) Then, Oord et al. (2018) presents to maximize JCPC(X,Y ), so that the learned representations X and Y have high contrastiveness. We note that JCPC has been commonly used in many recent self-supervised representation learning frameworks (He et al., 2020; Chen et al., 2020b), where they constrain the function to be f(x, y) = cosine(x, y) with cosine(·) being cosine similarity. Under this function design, maximizing JCPC leads the representations of related pairs to be close and representations of unrelated pairs to be distant. The category of modeling DKL(PXY ‖PXPY ) also includes the Donsker-Varadhan objective (JDV (Donsker & Varadhan, 1975; Belghazi et al., 2018)) and the Nguyen-Wainright-Jordan objective (JNWJ (Nguyen et al., 2010; Belghazi et al., 2018)), where Belghazi et al. (2018); Tsai et al. (2020b) show that JDV(X,Y ) = JNWJ(X,Y ) = DKL(PXY ‖PXPY ). The other divergence measurements considered in prior work are DJS(PXY ‖PXPY ) (with DJS referring to the Jenson-Shannon divergence) and DWass(PXY ‖PXPY ) (with DWass referring to the Wassersteindivergence). The instance of modeling DJS(PXY ‖PXPY ) is the Jensen-Shannon f-GAN objective( JJS (Nowozin et al., 2016; Hjelm et al., 2018) ) , where JJS(X,Y ) = 2 ( DJS(PXY ‖PXPY ) − log 2 ) .2 The instance of modeling DWass(PXY ‖PXPY ) is the Wasserstein Predictive Coding( JWPC (Ozair et al., 2019) ) , where JWPC(X,Y ) modifies JCPC(X,Y ) objective (equation 1) by searching the function from F to FL. FL denotes any class of 1-Lipschitz continuous functions from (X × Y) to R, and thus FL ⊂ F . Ozair et al. (2019) shows that JWPC(X,Y ) is the lower bound of bothDKL(PXY ‖PXPY ) andDWass(PXY ‖PXPY ). See Table 1 for all the equations. To conclude, the contrastive representation learning objectives are unsupervised representation learning methods that maximize the distribution divergence between PXY and PXPY . The learned representations cause high contrastiveness, and recent work (Arora et al., 2019; Tsai et al., 2020a) theoretically show that highly-contrastive representations could improve the performance on downstream tasks. After discussing prior contrastive representation learning objectives, we point out three challenges in their practical deployments: training stability, sensitivity to minibatch training size, and downstream task performance. In particular, the three challenges can hardly be handled well at the same time, where we highlight the conclusions in Table 1. 
Training Stability: The training stability highly relates to the variance of the objectives, where Song & Ermon (2019) shows that JDV and JNWJ exhibit inevitable high variance due to their inclusion of exponential function. As pointed out by Tsai et al. (2020b), JCPC, JWPC, and JJS have better training stability because JCPC and JWPC can be realized as a multi-class classification task and JJS can be realized as a binary classification task. The cross-entropy loss adopted in JCPC, JWPC, and JJS is highly-optimized and stable in existing optimization package (Abadi et al., 2016; Paszke et al., 2019). Sensitivity to minibatch training size: Among all the prior contrastive representation learning methods, JCPC is known to be sensitive to the minibatch training size (Ozair et al., 2019). Taking a closer look at equation 1, JCPC deploys an instance selection such that y1 should be selected from {y1, y2, · · · , yN}, with (x, y1) ∼ PXY , (x, yj>1) ∼ PXPY with N being the minibatch size. Previous work (Poole et al., 2019; Song & Ermon, 2019; Chen et al., 2020b; Caron et al., 2020) showed that a large N results in a more challenging instance selection and forces JCPC to have a better contrastiveness of y1 (related instance for x) against {yj}Nj=2 (unrelated instance for x). JDV, JNWJ, and JJS do not consider 2JJS(X,Y ) achieves its supreme value when f∗(x, y) = log(p(x, y)/p(x)p(y)) (Tsai et al., 2020b). Plugin f∗(x, y) into JJS(X,Y ), we can conclude JJS(X,Y ) = 2(DJS(PXY ‖PXPY )− log 2). the instance selection, and JWPC reduces the minibatch training size sensitivity by enforcing 1- Lipschitz constraint. Downstream Task Performance: The downstream task performance is what we care the most among all the three challenges. JCPC has been the most popular objective as it manifests superior performance over the other alternatives (Tschannen et al., 2019; Tsai et al., 2020b;a). We note that although JWPC shows better performance on Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets, we empirically find it not generalizing well to CIFAR-10/100 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). 2.2 RELATIVE PREDICTIVE CODING In this paper, we present Relative Predictive Coding (RPC), which achieves a good balance among the three challenges mentioned above: JRPC(X,Y ) := sup f∈F EPXY [f(x, y)]−αEPXPY [f(x, y)]− β 2 EPXY [ f2(x, y) ] −γ 2 EPXPY [ f2(x, y) ] , (2) where α > 0, β > 0, γ > 0 are hyper-parameters and we define them as relative parameters. Intuitively, JRPC contains no logarithm or exponential, potentially preventing unstable training due to numerical issues. Now, we discuss the roles of α, β, γ. At a first glance, α acts to discourage the scores of PXY and PXPY from being close, and β/γ acts as a `2 regularization coefficient to stop f from becoming large. For a deeper analysis, the relative parameters act to regularize our objective for boundedness and low variance. To show this claim, we first present the following lemma: Lemma 1 (Optimal Solution for JRPC) Let r(x, y) = p(x,y)p(x)p(y) be the density ratio. JRPC has the optimal solution f∗(x, y) = r(x,y)−αβ r(x,y)+γ := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . Lemma 1 suggests that JRPC achieves its supreme value at the ratio rα,β,γ(x, y) indexed by the relative parameters α, β, γ (i.e., we term rα,β,γ(x, y) as the relative density ratio). We note that rα,β,γ(x, y) is an increasing function w.r.t. r(x, y) and is nicely bounded even when r(x, y) is large. 
We will now show that the bounded rα,β,γ suggests the empirical estimation of JRPC has boundeness and low variance. In particular, let {xi, yi}ni=1 be n samples drawn uniformly at random from PXY and {x′j , y′j}mj=1 be m samples drawn uniformly at random from PXPY . Then, we use neural networks to empirically estimate JRPC as Ĵ m,n RPC: Definition 1 (Ĵm,nRPC, empirical estimation of JRPC) We parametrize f via a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where d ∈ N and Θ is compact. Then, Ĵm,nRPC = supfθ∈FΘ 1 n ∑n i=1 fθ(xi, yi)− 1 m ∑m j=1 αfθ(x ′ j , y ′ j)− 1n ∑n i=1 β 2 f 2 θ (xi, yi)− 1m ∑m j=1 γ 2 f 2 θ (x ′ j , y ′ j). Proposition 1 (Boundedness of Ĵm,nRPC, informal) 0 ≤ JRPC ≤ 1 2β + α2 2γ . Then, with probability at least 1− δ, |JRPC − Ĵm,nRPC| = O( √ d+log (1/δ) n′ ), where n ′ = min {n,m}. Proposition 2 (Variance of Ĵm,nRPC, informal) There exist universal constants c1 and c2 that depend only on α, β, γ, such that Var[Ĵm,nRPC] = O ( c1 n + c2 m ) . From the two propositions, whenm and n are large, i.e., the sample sizes are large, Ĵm,nRPC is bounded, and its variance vanishes to 0. First, the boundedness of Ĵm,nRPC suggests Ĵ m,n RPC will not grow to extremely large or small values. Prior contrastive learning objectives with good training stability (e.g., JCPC/JJS/JWPC) also have the boundedness of their objective values. For instance, the empirical estimation of JCPC is less than logN (equation 1) (Poole et al., 2019). Nevertheless, JCPC often performs the best only when minibatch size is large, and empirical performances of JJS and JWPC are not as competitive as JCPC. Second, the upper bound of the variance implies the training of Ĵm,nRPC can be stable, and in practice we observe a much smaller value than the stated upper bound. On the contrary, Song & Ermon (2019) shows that the empirical estimations of JDV and JNWJ exhibit inevitable variances that grow exponentially with the true DKL(PXY ‖PXPY ). Lastly, similar to prior contrastive learning objective that are related to distribution divergence measurement, we associate JRPC with the Chi-square divergence Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)] − 1 (Nielsen & Nock, 2013). The derivations are provided in Appendix. By having P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY , we can rewrite JRPC(X,Y ) as JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)]. Hence, JRPC can be regarded as a generalization of Dχ2 with the relative parameters α, β, γ, where Dχ2 can be recovered from JRPC by specializing α = 0, β = 0 and γ = 1 (e.g., Dχ2 = 2JRPC|α=β=0,γ=1 − 1). Note that JRPC may not be a formal divergence measure with arbitrary α, β, γ. 3 EXPERIMENTS We provide an overview of the experimental section. First, we conduct benchmark self-supervised representation learning tasks spanning visual object classification and speech recognition. This set of experiments are designed to discuss the three challenges of the contrastive representation learning objectives: downstream task performance (Section 3.1), training stability (Section 3.2), and minibatch size sensitivity (Section 3.3). We also provide an ablation study on the choices of the relative parameters in JRPC (Section 3.4). On these experiments we found that JRPC achieves a lower variance during training, a lower batch size insensitivity, and consistent performance improvement. Second, we relate JRPC with mutual information (MI) estimation (Section 3.5). 
The connection is that MI is an average statistic of the density ratio, and we have shown that the optimal solution of JRPC is the relative density ratio (see Lemma 1). Thus we can estimate MI using the density ratio recovered from the optimal solution of JRPC. In these two sets of experiments, we compare JRPC fairly with other contrastive learning objectives. In particular, across different objectives we fix the network, learning rate, optimizer, and batch size (we use the default configurations suggested by the original implementations from Chen et al. (2020c), Rivière et al. (2020), and Tsai et al. (2020b)). The only difference is the objective itself. In what follows, we perform the first set of experiments. We defer the experimental details to the Appendix.

Datasets. For visual object classification, we consider CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015). CIFAR-10/-100 and ImageNet contain labeled images only, while STL-10 contains labeled and unlabeled images. For speech recognition, we consider the LibriSpeech-100h (Panayotov et al., 2015) dataset, which contains 100 hours of 16kHz English speech from 251 speakers with 41 types of phonemes.

Training and Evaluation Details. For the vision experiments, we follow the setup from SimCLRv2 (Chen et al., 2020c), which considers visual object recognition as its downstream task. For the speech experiments, we follow the setup from prior work (Oord et al., 2018; Rivière et al., 2020), which considers phoneme classification and speaker identification as the downstream tasks. We briefly discuss the training and evaluation details in three modules: 1) related and unrelated data construction, 2) pre-training, and 3) fine-tuning and evaluation. For more details, please refer to the Appendix or the original implementations.

– Related and Unrelated Data Construction. In the vision experiments, we construct related images by applying different augmentations to the same image. Hence, when (x, y) ∼ PXY, x and y are the same image under different augmentations. Unrelated images are two randomly selected samples. In the speech experiments, we define the current latent feature (the feature at time t) and future samples (samples at time > t) as related data. In other words, the feature in the latent space should contain information that can be used to infer future time steps. A latent feature and randomly selected samples are considered unrelated data. (A schematic sketch of the vision pair construction is given below.)

– Pre-training. The pre-training stage refers to self-supervised training with a contrastive learning objective. Our training objective is defined in Definition 1, where we use neural networks to parametrize the function on the constructed related and unrelated data. Convolutional neural networks are used for the vision experiments; Transformers (Vaswani et al., 2017) and LSTMs (Hochreiter & Schmidhuber, 1997) are used for the speech experiments.

– Fine-tuning and Evaluation. After the pre-training stage, we fix the parameters of the pre-trained networks and add a small fine-tuning network on top of them. We then fine-tune this small network with the downstream labels from the data's training split. For the fine-tuning network, both the vision and speech experiments use multi-layer perceptrons. Last, we evaluate the fine-tuned representations on the data's test split. We would like to point out that we do not normalize the hidden representations encoded by the pre-training neural network for the loss calculation.
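The schematic sketch of the related/unrelated pair construction for the vision experiments referenced above: two random augmentations of the same image form a positive pair, and augmented views of different images in the minibatch serve as negatives. The augmentation recipe below is a simplified stand-in for the SimCLRv2 pipeline, not the exact transform list used in the paper.

```python
import torch
from torchvision import transforms

# Simplified stand-in augmentation; the paper follows SimCLRv2's recipe.
augment = transforms.Compose([
    transforms.RandomResizedCrop(32),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

def make_pairs(pil_images):
    """Return two augmented views per image.

    (view_a[i], view_b[i]) is a related (positive) pair drawn from P_XY;
    (view_a[i], view_b[j]) with j != i acts as an unrelated (negative) pair.
    """
    view_a = torch.stack([augment(img) for img in pil_images])
    view_b = torch.stack([augment(img) for img in pil_images])
    return view_a, view_b
```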
Normalizing the hidden representations in this way ("hidden normalization") is widely applied (Tian et al., 2019; Chen et al., 2020b;c) to stabilize training and increase performance for prior objectives, but we find it unnecessary for JRPC.

3.1 DOWNSTREAM TASK PERFORMANCES ON VISION AND SPEECH
For the downstream task performance in the vision domain, we test the proposed JRPC and other contrastive learning objectives on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet ILSVRC-2012 (Russakovsky et al., 2015). Here we report the best performances JRPC achieves on each dataset (we include the experimental details in A.7). Table 2 shows that the proposed JRPC outperforms the other objectives on all datasets. Using JRPC on the largest network (a ResNet with depth 152, channel width 2×, and selective kernels), the performance jumps from 77.80% with JCPC to 78.40% with JRPC. Regarding speech representation learning, the downstream performances for phoneme and speaker classification are shown in Table 3 (we defer the experimental details to Appendix A.9). Compared to JCPC, JRPC improves the phoneme classification results by 4.8 percentage points and the speaker classification results by 0.3 percentage points, bringing them closer to the fully supervised model. Overall, the proposed JRPC performs better than the other unsupervised learning objectives on both phoneme classification and speaker classification tasks.

3.2 TRAINING STABILITY
We provide empirical training stability comparisons of JDV, JNWJ, JCPC, and JRPC by plotting the values of the objectives as the training step increases. We apply the four objectives to the SimCLRv2 framework and train on the CIFAR-10 dataset. All training setups are exactly the same except for the objectives. In our experiments, JDV and JNWJ soon explode to NaN and disrupt training (shown as early stopping in Figure 1a; extremely large values are not plotted due to scale constraints). On the other hand, JRPC and JCPC have low variance, and both enjoy stable training. As a result, representations learned with the unstable JDV and JNWJ suffer on the downstream tasks, while representations learned with JRPC and JCPC work much better.

3.3 MINIBATCH SIZE SENSITIVITY
We then analyze the effect of the minibatch size on JRPC and JCPC, since JCPC is known to be sensitive to the minibatch size (Poole et al., 2019). We train SimCLRv2 (Chen et al., 2020c) on CIFAR-10 and the model from Rivière et al. (2020) on LibriSpeech-100h using JRPC and JCPC with different minibatch sizes. The settings of the relative parameters are the same as in Section 3.2. From Figures 1b and 1c, we observe that both JRPC and JCPC achieve their optimal performance at a large minibatch size. However, when the minibatch size decreases, the performance of JCPC shows higher sensitivity and suffers more when the number of minibatch samples is small. The result suggests that the proposed method may be less sensitive to changes in the minibatch size than JCPC under the same training settings.

3.4 EFFECT OF RELATIVE PARAMETERS
We study the effect of different combinations of the relative parameters in JRPC by comparing downstream performances on visual object recognition. We train SimCLRv2 on CIFAR-10 with different combinations of α, β, and γ in JRPC and fix all other experimental settings. We choose α ∈ {0, 0.001, 1.0}, β ∈ {0, 0.001, 1.0}, γ ∈ {0, 0.001, 1.0}, and we report the best performance under each combination of α, β, and γ.
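The ablation grid just described can be enumerated directly; the short sketch below only lists the 27 combinations, and each combination is assumed to be trained with the SimCLRv2 setup described above (the training loop itself is omitted).

```python
from itertools import product

# Enumerate the relative-parameter grid used in the ablation of Section 3.4.
# Each combination would drive a separate J_RPC pre-training run on CIFAR-10.
values = [0.0, 0.001, 1.0]
grid = [
    {"alpha": a, "beta": b, "gamma": g}
    for a, b, g in product(values, values, values)
]
print(len(grid))  # 27 combinations of (alpha, beta, gamma)
```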
From Figure 2, we first observe that α > 0 gives better downstream performance than α = 0 when β and γ are fixed. This observation is as expected, since α > 0 encourages the representations of related and unrelated samples to be pushed apart. Then, we find that a small but nonzero β (β = 0.001) and a large γ (γ = 1.0) give the best performance compared to the other combinations. Since β and γ serve as the coefficients of the ℓ2 regularization, the results imply that the regularization is a strong and sensitive factor that influences the performance. The results here are not as competitive as in Table 2 because the CIFAR-10 result reported in Table 2 uses a set of relative parameters (α = 1.0, β = 0.005, γ = 1.0) that differs from the combinations in this subsection. Also, we use quite different ranges of γ on ImageNet (see A.7 for details). In conclusion, we find empirically that a non-zero α, a small β, and a large γ lead to the best representations for the downstream task on CIFAR-10.

3.5 RELATION TO MUTUAL INFORMATION ESTIMATION
The presented approach also closely relates to mutual information estimation. For random variables X and Y with joint distribution PXY and product of marginals PXPY, the mutual information is defined as I(X;Y) = DKL(PXY ‖ PXPY). Lemma 1 states that, given the optimal solution f∗(x, y) of JRPC, we can recover the density ratio r(x, y) := p(x, y)/(p(x)p(y)) as r(x, y) = (γ/β + α) / (1 − β f∗(x, y)) − γ/β. We can empirically estimate r̂(x, y) from the estimated f̂(x, y) via this transformation, and use r̂(x, y) to estimate the mutual information (Tsai et al., 2020b). Specifically, I(X;Y) ≈ (1/n) Σi=1..n log r̂(xi, yi) with (xi, yi) ∼ P⊗nX,Y, where P⊗nX,Y is the uniformly sampled empirical distribution of PX,Y.

We follow prior work (Poole et al., 2019; Song & Ermon, 2019; Tsai et al., 2020b) for the experiments. We consider X and Y as two 20-dimensional Gaussians with correlation ρ, and our goal is to estimate the mutual information I(X;Y). Then, we perform a cubic transformation on y so that y ↦ y³. The first task is referred to as the Gaussian task and the second as the Cubic task, where both have the ground truth I(X;Y) = −10 log(1 − ρ²). The models are trained for 20,000 steps with I(X;Y) starting at 2 and increased by 2 every 4,000 steps. Our method is compared with the baseline methods JCPC (Oord et al., 2018), JNWJ (Nguyen et al., 2010), JJS (Nowozin et al., 2016), SMILE (Song & Ermon, 2019), and Difference of Entropies (DoE) (McAllester & Stratos, 2020). All approaches use the same network design, learning rate, optimizer, and minibatch size for a fair comparison. First, we observe that JCPC (Oord et al., 2018) has the smallest variance, while it exhibits a large bias (the mutual information estimated from JCPC is upper-bounded by log(batch size)). Second, JNWJ (Nguyen et al., 2010) and JJS (Poole et al., 2019) have large variances, especially in the Cubic task. Song & Ermon (2019) pointed out the limitations of JCPC, JNWJ, and JJS, and developed the SMILE method, which clips the value of the estimated density function to reduce the variance of the estimators. DoE (McAllester & Stratos, 2020) is neither a lower bound nor an upper bound of the mutual information, but can achieve accurate estimates when the underlying mutual information is large. JRPC exhibits comparable bias and lower variance compared to the SMILE method, and is more stable than the DoE method.
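For concreteness, the plug-in MI estimator described at the start of this subsection can be written as a short function: invert f = (r − α)/(β r + γ) to get r̂, then average log r̂ over samples from the joint. The relative-parameter values and the small clamp (to guard log(0) under numerical noise) are our own choices for the sketch, not part of the method as stated.

```python
import torch

def mi_from_rpc_critic(f_pos, alpha=1.0, beta=0.005, gamma=1.0):
    """Plug-in MI estimate from the learned RPC critic (Section 3.5).

    f_pos: critic outputs f_hat(x_i, y_i) on samples drawn from the joint P_XY.
    Recovers r_hat by inverting the relative density ratio, then averages log r_hat.
    """
    # (gamma/beta + alpha) / (1 - beta*f) - gamma/beta  ==  (alpha + gamma*f) / (1 - beta*f)
    r_hat = (alpha + gamma * f_pos) / (1.0 - beta * f_pos)
    r_hat = torch.clamp(r_hat, min=1e-8)  # numerical guard only; no value clipping is required in theory
    return torch.log(r_hat).mean()
```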
We would like to highlight our method's low-variance property: we neither clip the values of the estimated density ratio nor impose an upper bound on our estimated mutual information.

4 RELATED WORK
As a subset of unsupervised representation learning, self-supervised representation learning (SSL) adopts self-defined signals as supervision and uses the learned representation for downstream tasks, such as object detection and image captioning (Liu et al., 2020). We categorize SSL work into two groups: methods whose signal is the input's hidden property and methods whose signal is a corresponding view of the input. In the first group, for example, the Jigsaw puzzle (Noroozi & Favaro, 2016) shuffles image patches and defines the SSL task as predicting the shuffled positions of the patches. Other instances are Predicting Rotations (Gidaris et al., 2018) and Shuffle & Learn (Misra et al., 2016). In the second group, the SSL task aims at modeling the co-occurrence of multiple views of the data, via contrastive or predictive learning objectives (Tsai et al., 2020a). Predictive objectives encourage reconstruction from one view of the data to the other, such as predicting the lower part of an image from its upper part (ImageGPT by Chen et al. (2020a)). Comparing the contrastive with the predictive learning approaches, Tsai et al. (2020a) point out that the former requires fewer computational resources for good performance but suffers more from over-fitting. Theoretical analyses (Arora et al., 2019; Tsai et al., 2020a; Tosh et al., 2020) suggest that contrastively learned representations can lead to good downstream performance. Beyond the theory, Tian et al. (2020) show that what matters most for performance are 1) the choice of the contrastive learning objective and 2) the creation of the positive and negative data pairs in the contrastive objective. Recent work (Khosla et al., 2020) extends the usage of contrastive learning from the self-supervised setting to the supervised setting. The supervised setting defines the positive pairs as data from the same class in the contrastive objective, while the self-supervised setting defines the positive pairs as data with different augmentations.

Our work also closely relates to skewed divergence measures between distributions (Lee, 1999; 2001; Nielsen, 2010; Yamada et al., 2013). Recall that the relative parameters play a crucial role in regularizing our objective for boundedness and low variance. This idea is similar to skewed divergence measurement: when calculating the divergence between distributions P and Q, instead of considering D(P ‖ Q), these approaches consider D(P ‖ αP + (1 − α)Q), with D representing the divergence and 0 < α < 1. A natural example is that the Jensen-Shannon divergence is a symmetric skewed KL divergence: DJS(P ‖ Q) = 0.5 DKL(P ‖ 0.5P + 0.5Q) + 0.5 DKL(Q ‖ 0.5P + 0.5Q). Compared to its non-skewed counterpart, the skewed divergence has been shown to yield a more robust estimate of its value (Lee, 1999; 2001; Yamada et al., 2013). Different from these works, which focus on estimating the values of distribution divergences, we focus on learning self-supervised representations.

5 CONCLUSION
In this work, we present RPC, the Relative Predictive Coding, which achieves a good balance among the three challenges of modeling a contrastive learning objective: training stability, sensitivity to minibatch size, and downstream task performance.
We believe this work brings an appealing option for training self-supervised models and inspires future work to design objectives for balancing the aforementioned three challenges. In the future, we are interested in applying RPC in other application domains and developing more principled approaches for better representation learning. ACKNOWLEDGEMENT This work was supported in part by the NSF IIS1763562, NSF Awards #1750439 #1722822, National Institutes of Health, IARPA D17PC00340, ONR Grant N000141812861, and Facebook PhD Fellowship. We would also like to acknowledge NVIDIA’s GPU support and Cloud TPU support from Google’s TensorFlow Research Cloud (TFRC). A APPENDIX A.1 PROOF OF LEMMA 1 IN THE MAIN TEXT Lemma 2 (Optimal Solution for JRPC, restating Lemma 1 in the main text) Let JRPC(X,Y ) := sup f∈F EPXY [f(x, y)]−αEPXPY [f(x, y)]− β 2 EPXY [ f2(x, y) ] −γ 2 EPXPY [ f2(x, y) ] and r(x, y) = p(x,y)p(x)p(y) be the density ratio. JRPC has the optimal solution f∗(x, y) = r(x, y)− α β r(x, y) + γ := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . Proof: The second-order functional derivative of the objective is −βdPX,Y − γdPXPY , which is always negative. The negative second-order functional derivative implies the objective has a supreme value. Then, take the first-order functional derivative ∂JRPC∂m and set it to zero: dPX,Y − α · dPXPY − β · f(x, y) · dPX,Y − γ · f(x, y) · dPXPY = 0. We then get f∗(x, y) = dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY = p(x, y)− αp(x)p(y) βp(x, y) + γp(x)p(y) = r(x, y)− α βr(x, y) + γ . Since 0 ≤ r(x, y) ≤ ∞, we have −αγ ≤ r(x,y)−α βr(x,y)+γ ≤ 1 β . Hence, ∀β 6= 0, γ 6= 0, f∗(x, y) := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . A.2 RELATION BETWEEN JRPC AND Dχ2 In this subsection, we aim to show the following: 1) Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)] − 1; and 2) JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)] by having P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY . Lemma 3 Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)]− 1 Proof: By definition (Nielsen & Nock, 2013), Dχ2(PXY ‖PXPY ) = ∫ (dPXY )2 dPXPY − 1 = ∫ ( dPXY dPXPY )2 dPXPY − 1 = ∫ ( p(x, y) p(x)p(y) )2 dPXPY − 1 = ∫ r2(x, y)dPXPY − 1 = EPXPY [r2(x, y)]− 1. Lemma 4 Defining P ′ = ββ+γPXY + γ β+γPXPY as a mixture distribution of PXY and PXPY , JRPC(X,Y ) = β+γ 2 EP ′ [r 2 α,β,γ(x, y)]. Proof: Plug in the optimal solution f∗(x, y) = dPX,Y −α·dPXPYβ·dPX,Y +γ·dPXPY (see Lemma 2) into JRPC: JRPC = EPXY [f∗(x, y)]− αEPXPY [f∗(x, y)]− β 2 EPXY [ f∗2(x, y) ] − γ 2 EPXPY [ f∗2(x, y) ] = ∫ f∗(x, y) · ( dPXY − α · dPXPY ) − 1 2 f∗2(x, y) · ( β · dPXY + γ · dPXPY ) = ∫ dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ( dPXY − α · dPXPY ) − 1 2 ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = 1 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = β + γ 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β β + γ · dPXY + γ β + γ · dPXPY ) . Since we define rα,β,γ = dPX,Y −α·dPXPY β·dPX,Y +γ·dPXPY and P ′ = ββ+γPXY + γ β+γPXPY , JRPC = β + γ 2 EP ′ [r2α,β,γ(x, y)]. A.3 PROOF OF PROPOSITION 1 IN THE MAIN TEXT The proof contains two parts: showing 0 ≤ JRPC ≤ 12β + α2 2γ (see Section A.3.1) and Ĵ m,n RPC is a consistent estimator for JRPC (see Section A.3.2). A.3.1 BOUNDNESS OF JRPC Lemma 5 (Boundness of JRPC) 0 ≤ JRPC ≤ 12β + α2 2γ Proof: Lemma 4 suggests JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)] with P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY . Hence, it is obvious JRPC(X,Y ) ≥ 0. 
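Before completing the proof of Lemma 5, here is a quick numeric sanity check of Lemma 3 and Lemma 4 on a small discrete example: plugging the closed-form optimum into JRPC matches ((β+γ)/2) EP′[r²α,β,γ], and the α = β = 0, γ = 1 specialization recovers the Chi-square divergence. The 4×4 joint distribution and the parameter values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((4, 4)); P /= P.sum()              # a joint p(x, y) over a 4x4 alphabet
Px, Py = P.sum(1, keepdims=True), P.sum(0, keepdims=True)
Q = Px * Py                                        # product of marginals p(x)p(y)
r = P / Q                                          # density ratio

def j_rpc(alpha, beta, gamma):
    f = (r - alpha) / (beta * r + gamma)           # optimal critic from Lemma 1 / Lemma 2
    return ((P * f).sum() - alpha * (Q * f).sum()
            - 0.5 * beta * (P * f**2).sum() - 0.5 * gamma * (Q * f**2).sum())

alpha, beta, gamma = 1.0, 0.005, 1.0
f_star = (r - alpha) / (beta * r + gamma)
P_mix = (beta * P + gamma * Q) / (beta + gamma)    # the mixture P' from Lemma 4
assert np.isclose(j_rpc(alpha, beta, gamma),
                  0.5 * (beta + gamma) * (P_mix * f_star**2).sum())

chi2 = (Q * r**2).sum() - 1.0                      # D_chi2(P_XY || P_X P_Y), Lemma 3
assert np.isclose(chi2, 2.0 * j_rpc(0.0, 0.0, 1.0) - 1.0)
```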
We leverage the intermediate results in the proof of Lemma 4: JRPC(X,Y ) = 1 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = 1 2 ∫ dPX,Y ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) − α 2 ∫ dPXPY ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) = 1 2 EPXY [rα,β,γ(x, y)]− α 2 EPXPY [rα,β,γ(x, y)]. Since −αγ ≤ rα,β,γ ≤ 1 β , JRPC(X,Y ) ≤ 1 2β + α2 2γ . A.3.2 CONSISTENCY We first recall the definition of the estimation of JRPC: Definition 2 (Ĵm,nRPC, empirical estimation of JRPC, restating Definition 1 in the main text) We parametrize f via a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where d ∈ N and Θ is compact. Let {xi, yi}ni=1 be n samples drawn uniformly at random from PXY and {x′j , y′j}mj=1 be m samples drawn uniformly at random from PXPY . Then, Ĵm,nRPC = sup fθ∈FΘ 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j). Our goal is to show that Ĵm,nRPC is a consistent estimator for JRPC. We begin with the following definition: Ĵm,nRPC,θ := 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j) (3) and E [ ĴRPC,θ ] := EPXY [fθ(x, y)]−αEPXPY [fθ(x, y)]− β 2 EPXY [f2θ (x, y)]− γ 2 EPXPY [f2θ (x, y)]. (4) Then, we follow the steps: • The first part is about estimation. We show that, with high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ. • The second part is about approximation. We will apply the universal approximation lemma of neural networks (Hornik et al., 1989) to show that there exists a network θ∗ such that E [ ĴRPC,θ∗ ] is close to JRPC. Part I - Estimation: With high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ. Throughout the analysis on the uniform convergence, we need the assumptions on the boundness and smoothness of the function fθ. Since we show the optimal function f is bounded in JRPC, we can use the same bounded values for fθ without losing too much precision. The smoothness of the function suggests that the output of the network should only change slightly when only slightly perturbing the parameters. Specifically, the two assumptions are as follows: Assumption 1 (boundness of fθ) There exist universal constants such that ∀fθ ∈ FΘ, CL ≤ fθ ≤ CU . For notations simplicity, we let M = CU − CL be the range of fθ and U = max {|CU |, |CL|} be the maximal absolute value of fθ. In the paper, we can choose to constrain that CL = −αγ and CU = 1 β since the optimal function f ∗ has −αγ ≤ f ∗ ≤ 1β . Assumption 2 (smoothness of fθ) There exists constant ρ > 0 such that ∀(x, y) ∈ (X × Y) and θ1, θ2 ∈ Θ, |fθ1(x, y)− fθ2(x, y)| ≤ ρ|θ1 − θ2|. Now, we can bound the rate of uniform convergence of a function class in terms of covering number (Bartlett, 1998): Lemma 6 (Estimation) Let > 0 and N (Θ, ) be the covering number of Θ with radius . Then, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) . Proof: For notation simplicity, we define the operators • P (f) = EPXY [f(x, y)] and Pn(f) = 1n ∑n i=1 f(xi, yi) • Q(f) = EPXPY [f(x, y)] and Qm(f) = 1m ∑m j=1 f(x ′ j , y ′ j) Hence,∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ = ∣∣Pn(fθ)− P (fθ)− αQm(fθ) + αQ(fθ)− βPn(f2θ ) + βP (f2θ )− γQm(f2θ ) + γQ(f2θ )∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β ∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣ Let ′ = 4ρ ( 1+α+2(β+γ)U ) and T := N (Θ, ′). 
LetC = {fθ1 , fθ2 , · · · , fθT }with {θ1, θ2, · · · , θT } be such that B∞(θ1, ′), · · · , B∞(θT , ′) are ′ cover. Hence, for any fθ ∈ FΘ, there is an fθk ∈ C such that ‖θ − θk‖∞ ≤ ′. Then, for any fθk ∈ C:∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β ∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣ ≤ |Pn(fθk)− P (fθk)|+ |Pn(fθ)− Pn(fθk)|+ |P (fθ)− P (fθk)| + α ( |Qm(fθk)−Q(fθk)|+ |Qm(fθ)−Qm(fθk)|+ |Q(fθ)−Q(fθk)| ) + β ( ∣∣Pn(f2θk)− P (f2θk)∣∣+ ∣∣Pn(f2θ )− Pn(f2θk)∣∣+ ∣∣P (f2θ )− P (f2θk)∣∣ ) + γ ( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ ∣∣Qm(f2θ )−Qm(f2θk)∣∣+ ∣∣Q(f2θ )−Q(f2θk)∣∣ ) ≤ |Pn(fθk)− P (fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖ + α ( |Qm(fθk)−Q(fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖ ) + β ( ∣∣Pn(f2θk)− P (f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) + γ ( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) = |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ + 2ρ ( 1 + α+ 2(β + γ)U ) ‖θ − θk‖ ≤ |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 , where • |Pn(fθ)− Pn(fθk)| ≤ ρ‖θ − θk‖ due to Assumption 2, and the result also applies for |P (fθ)− P (fθk)|, |Qm(fθ)−Qm(fθk)|, and |Q(fθ)−Q(fθk)|. • ∣∣Pn(f2θ )− Pn(f2θk)∣∣ ≤ 2‖fθ‖∞ρ‖θ−θk‖ ≤ 2ρU‖θ−θk‖ due to Assumptions 1 and 2. The result also applies for ∣∣P (f2θ )− P (f2θk)∣∣, ∣∣Qm(f2θ )−Qm(f2θk)∣∣, and ∣∣Q(f2θ )−Q(f2θk)∣∣. Hence, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤Pr ( max fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 ≥ ) = Pr ( max fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2 ) ≤ T∑ k=1 Pr ( |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2) ≤ T∑ k=1 Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) + Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) + Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8)+ Pr(γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) . With Hoeffding’s inequality, • Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) ≤ 2exp ( − n 2 32M2 ) • Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) ≤ 2exp ( − m 2 32M2α2 ) • Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8) ≤ 2exp(− n 232U2β2) • Pr ( γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) ≤ 2exp(− m 232U2γ2) To conclude, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) . Part II - Approximation: Neural Network Universal Approximation. We leverage the universal function approximation lemma of neural network Lemma 7 (Approximation (Hornik et al., 1989)) Let > 0. There exists d ∈ N and a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where Θ is compact, such that inf fθ∈FΘ ∣∣∣E[ĴRPC,θ]− JRPC∣∣∣ ≤ . Part III - Bringing everything together. Now, we are ready to bring the estimation and approximation together to show that there exists a neural network θ∗ such that, with high probability, Ĵm,nRPC,θ can approximate JRPC with n′ = min {n,m} at a rate of O(1/ √ n′): Proposition 3 With probability at least 1 − δ, ∃θ∗ ∈ Θ, |JRPC − Ĵm,nRPC,θ| = O( √ d+log (1/δ) n′ ), where n′ = min {n,m}. Proof: The proof follows by combining Lemma 6 and 7. First, Lemma 7 suggests, ∃θ∗ ∈ Θ,∣∣∣E[ĴRPC,θ∗]− JRPC∣∣∣ ≤ 2 . Next, we perform analysis on the estimation error, aiming to find n,m and the corresponding probability, such that ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ∗]∣∣∣ ≤ 2 . 
Applying Lemma 6 with the covering number of the neural network: ( N (Θ, ) = O ( exp ( d log (1/ ) )) (Anthony & Bartlett, 2009) ) and let n′ = min{n,m}: Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ 2 ) ≤2N (Θ, 8ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 128M2 ) + exp ( − m 2 128M2α2 ) + exp ( − n 2 128U2β2 ) + exp ( − m 2 128U2γ2 )) =O ( exp ( d log (1/ )− n′ 2 )) , where the big-O notation absorbs all the constants that do not require in the following derivation. Since we want to bound the probability with 1− δ, we solve the such that exp ( d log (1/ )− n′ 2 ) ≤ δ. With log (x) ≤ x− 1, n′ 2 + d( − 1) ≥ n′ 2 + dlog ≥ log (1/δ), where this inequality holds when = O (√ d+ log (1/δ) n′ ) . A.4 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM AN ASYMPTOTIC VIEWPOINT Here, we provide the variance analysis on Ĵm,nRPC via an asymptotic viewpoint. First, assuming the network is correctly specified, and hence there exists a network parameter θ∗ satisfying f∗(x, y) = fθ∗(x, y) = rα,β,γ(x, y). Then we recall that Ĵ m,n RPC is a consistent estimator of J RPC (see Proposition 3), and under regular conditions, the estimated network parameter θ̂ in Ĵm,nRPC satisfying the asymptotic normality in the large sample limit (see Theorem 5.23 in (Van der Vaart, 2000)). We recall the definition of Ĵm,nRPC,θ in equation 3 and let n ′ = min{n,m}, the asymptotic expansion of Ĵm,nRPC has Ĵm,nRPC,θ∗ = Ĵ m,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + o(‖θ∗ − θ̂‖) = Ĵm,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + op( 1√ n′ ) = Ĵm,n RPC,θ̂ + op( 1√ n′ ), (5) where ˙̂Jm,n RPC,θ̂ = 0 since θ̂ is the estimation from Ĵm,nRPC = sup fθ∈FΘ Ĵm,nRPC,θ. Next, we recall the definition in equation 4: E[ĴRPC,θ̂] = EPXY [fθ̂(x, y)]− αEPXPY [fθ̂(x, y)]− β 2 EPXY [f2θ̂ (x, y)]− γ 2 EPXPY [f2θ̂ (x, y)]. Likewise, the asymptotic expansion of E[ĴRPC,θ] has E[ĴRPC,θ̂] = E[ĴRPC,θ∗ ] + E[ ˙̂ JRPC,θ∗ ](θ̂ − θ∗) + o(‖θ̂ − θ∗‖) = E[ĴRPC,θ∗ ] + E[ ˙̂JRPC,θ∗ ](θ̂ − θ∗) + op( 1√ n′ ) = E[ĴRPC,θ∗ ] + op( 1√ n′ ), (6) where E[ ˙̂JRPC,θ∗ ] = 0 since E[ĴRPC,θ∗ ] = JRPC and θ∗ satisfying f∗(x, y) = fθ∗(x, y). Combining equations 5 and 6: Ĵm,n RPC,θ̂ − E[ĴRPC,θ̂] =Ĵ m,n RPC,θ∗ − JRPC + op( 1√ n′ ) = 1 n n∑ i=1 f∗θ (xi, yi)− α 1 m m∑ j=1 f∗θ (x ′ j , y ′ j)− β 2 1 n n∑ i=1 f2θ∗(xi, yi)− γ 2 1 m m∑ j=1 f2θ∗(x ′ j , y ′ j) − EPXY [f∗(x, y)] + αEPXPY [f∗(x, y)] + β 2 EPXY [ f∗2(x, y) ] + γ 2 EPXPY [ f∗2(x, y) ] + op( 1√ n′ ) = 1 n n∑ i=1 rα,β,γ(xi, yi)− α 1 m m∑ j=1 rα,β,γ(x ′ j , y ′ j)− β 2 1 n n∑ i=1 r2α,β,γ(xi, yi)− γ 2 1 m m∑ j=1 r2α,β,γ(x ′ j , y ′ j) − EPXY [rα,β,γ(x, y)] + αEPXPY [rα,β,γ(x, y)] + β 2 EPXY [ r2α,β,γ(x, y) ] + γ 2 EPXPY [ r2α,β,γ(x, y) ] + op( 1√ n′ ) = 1√ n · 1√ n n∑ i=1 ( rα,β,γ(xi, yi)− β 2 r2α,β,γ(xi, yi)− EPXY [ rα,β,γ(x, y)− β 2 r2α,β,γ(x, y) ]) − 1√ m · 1√ m m∑ j=1 ( αrα,β,γ(x ′ j , y ′ j) + γ 2 r2α,β,γ(x ′ j , y ′ j)− EPXPY [ αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y) ]) + op( 1√ n′ ). Therefore, the asymptotic Variance of Ĵm,nRPC is Var[Ĵm,nRPC] = 1 n VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] + 1 m VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] + o( 1 n′ ). First, we look at VarPXY [rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y)]. Since β > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − 2αγ+βα 2 2γ2 ≤ rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y) ≤ 12β . Hence, VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] ≤ max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} . Next, we look at VarPXPY [αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y)]. Since α ≥ 0, γ > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − α2 2γ ≤ αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y) ≤ 2αβ+γ 2β2 . 
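These elementary pointwise bounds, together with the corresponding ones for the PXY term derived above, can be checked numerically over the admissible range of rα,β,γ; the parameter values below are illustrative only, and this sketch verifies only the per-term bounds, not the full asymptotic argument.

```python
import numpy as np

alpha, beta, gamma = 1.0, 0.005, 1.0
t = np.linspace(-alpha / gamma, 1.0 / beta, 200001)   # admissible values of r_abg

g = t - 0.5 * beta * t**2            # term appearing under Var_{P_XY}
h = alpha * t + 0.5 * gamma * t**2   # term appearing under Var_{P_X P_Y}

assert g.min() >= -(2 * alpha * gamma + beta * alpha**2) / (2 * gamma**2) - 1e-9
assert g.max() <= 1.0 / (2 * beta) + 1e-9
assert h.min() >= -alpha**2 / (2 * gamma) - 1e-9
assert h.max() <= (2 * alpha * beta + gamma) / (2 * beta**2) + 1e-9
print(g.min(), g.max(), h.min(), h.max())
```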
Hence, VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] ≤ max {(α2 2γ )2 , (2αβ + γ 2β2 )2} . Combining everything together, we restate the Proposition 2 in the main text: Proposition 4 (Asymptotic Variance of Ĵm,nRPC) Var[Ĵm,nRPC] = 1 n VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] + 1 m VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] + o( 1 n′ ) ≤ 1 n max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} + 1 m max {(α2 2γ )2 , (2αβ + γ 2β2 )2} + o( 1 n′ ) A.5 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM BOUNDNESS OF fθ As discussed in Assumption 1, for the estimation Ĵm,nRPC, we can bound the function fθ in FΘ within [−αγ , 1 β ] without losing precision. Then, re-arranging Ĵ m,n RPC: sup fθ∈FΘ 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j) sup fθ∈FΘ 1 n n∑ i=1 ( fθ(xi, yi)− β 2 f2θ (xi, yi) ) + 1 m n∑ j=m ( αfθ(x ′ j , y ′ j) + γ 2 f2θ (x ′ j , y ′ j) ) Then, since −αγ ≤ fθ(·, ·) ≤ 1 β , basic calculations give us −2αγ + βα 2 2γ2 ≤ fθ(xi, yi)− β 2 f2θ (xi, yi) ≤ 1 2β and −α 2 2γ ≤ αfθ(x′j , y′j)+ γ 2 f2θ (x ′ j , y ′ j) ≤ 2αβ + γ 2β2 . The resulting variances have Var[fθ(xi, yi)− β 2 f2θ (xi, yi)] ≤ max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} and Var[αfθ(x ′ j , y ′ j) + γ 2 f2θ (x ′ j , y ′ j)] ≤ max {(α2 2γ )2 , (2αβ + γ 2β2 )2} . Taking the mean of m,n independent random variables gives the result: Proposition 5 (Variance of Ĵm,nRPC) Var[Ĵm,nRPC] ≤ 1 n max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} + 1 m max {(α2 2γ )2 , (2αβ + γ 2β2 )2} . A.6 IMPLEMENTATION OF EXPERIMENTS For visual representation learning, we follow the implementation in https://github.com/ google-research/simclr. For speech representation learning, we follow the implementation in https://github.com/facebookresearch/CPC_audio. For MI estimation, we follow the implementation in https://github.com/yaohungt/Pointwise_ Dependency_Neural_Estimation/tree/master/MI_Est_and_CrossModal.. A.7 RELATIVE PREDICTIVE CODING ON VISION The whole pipeline of pretraining contains the following steps: First, a stochastic data augmentation will transform one image sample xk to two different but correlated augmented views, x′2k−1 and x′2k. Then a base encoder f(·) implemented using ResNet (He et al., 2016) will extract representations from augmented views, creating representations h2k−1 and h2k. Later a small neural network g(·) called projection head will map h2k−1 and h2k to z2k−1 and z2k in a different latent space. For each minibatch of N samples, there will be 2N views generated. For each image xk there will be one positive pair x′2k−1 and x ′ 2k and 2(N − 1) negative samples. The RPC loss between a pair of positive views, x′i and x ′ j (augmented from the same image) , can be calculated by the substitution fθ(x ′ i,x ′ j) = (zi · zj)/τ = si,j (τ is a hyperparameter) to the definition of RPC: `RPCi,j = −(si,j − α 2(N − 1) 2N∑ k=1 1[k 6=i]si,k − β 2 s2i,j − γ 2 · 2(N − 1) 2N∑ k=1 1[k6=i]s 2 i,k) (7) For losses other than RPC, a hidden normalization of si,j is often required by replacing zi · zj with (zi ·zj)/|zi||zj |. CPC and WPC adopt this, while other objectives needs it to help stabilize training variance. RPC does not need this normalization. A.8 CIFAR-10/-100 AND IMAGENET EXPERIMENTS DETAILS ImageNet Following the settings in (Chen et al., 2020b;c), we train the model on Cloud TPU with 128 cores, with a batch size of 4, 096 and global batch normalization 3 (Ioffe & Szegedy, 2015). 
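Returning to the per-pair loss of equation 7 above, the following is a minimal sketch of how it can be computed from a 2N × 2N score matrix of projected views. The pairing layout (views 2k and 2k+1 come from the same image), the variable names, and the default parameter values (mirroring the ImageNet ResNet-50 settings reported in A.8) are assumptions for illustration, not the authors' reference implementation; as written in equation 7, the sum over k ≠ i includes the positive partner, which we reproduce literally.

```python
import torch

def rpc_pairwise_loss(z, alpha=0.3, beta=0.001, gamma=0.1, tau=32.0):
    """Per-pair RPC loss in the spirit of equation 7 for a 2N-view minibatch.

    z: projected features of the 2N augmented views, shape [2N, d], laid out so
       that views 2k and 2k+1 are the two augmentations of the same image.
    Scores are s_{i,k} = (z_i . z_k) / tau, with no hidden normalization.
    """
    two_n = z.shape[0]
    n = two_n // 2
    s = (z @ z.t()) / tau                                       # [2N, 2N] score matrix
    off_diag = ~torch.eye(two_n, dtype=torch.bool, device=z.device)

    idx = torch.arange(two_n, device=z.device)
    s_pos = s[idx, idx ^ 1]                                     # s_{i,j} with j the partner view

    sum_s = (s * off_diag).sum(dim=1)                           # sum over k != i
    sum_s2 = ((s ** 2) * off_diag).sum(dim=1)

    loss_i = -(s_pos
               - alpha / (2 * (n - 1)) * sum_s
               - 0.5 * beta * s_pos ** 2
               - gamma / (2 * 2 * (n - 1)) * sum_s2)
    return loss_i.mean()
```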
Here we refer to the term batch size as the number of images (or utterances in the speech experiments) we use per GPU, while the term minibatch size refers to the number of negative samples used to calculate the objective, such as CPC or our proposed RPC. The largest model we train is a 152-layer ResNet with selective kernels (SK) (Li et al., 2019) and 2× wider channels. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate linearly increases for the first 20 epochs, reaching a maximum of 6.4, then decayed with cosine decay schedule. The weight decay is 10−4. A MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer, and train the model for only 100 epochs rather than 800 epochs due to computational constraints. These two options slightly reduce CPC’s performance benchmark for about 2% with the exact same setting. The unsupervised pre-training is followed by a supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use learning rates 0.16 and 0.064 for standard 50-layer ResNet and larger 152-layer ResNet respectively, and weight decay and learning rate warmup are removed. Different from Chen et al. (2020c), we use a batch size of 4, 096, and we do not use global batch normalization for fine-tuning. For JRPC we disable hidden normalization and use a temperature τ = 32. For all other objectives, we use hidden normalization and τ = 0.1 following previous work (Chen et al., 2020c). For relative parameters, we use α = 0.3, β = 0.001, γ = 0.1 and α = 0.3, β = 0.001, γ = 0.005 for ResNet-50 and ResNet-152 respectively. CIFAR-10/-100 Following the settings in (Chen et al., 2020b), we train the model on a single GPU, with a batch size of 512 and global batch normalization (Ioffe & Szegedy, 2015). We use ResNet (He et al., 2016) of depth 18 and depth 50, and does not use Selective Kernel (Li et al., 2019) or a multiplied width size. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate linearly increases for the first 20 epochs, reaching a maximum of 6.4, then decayed with cosine decay schedule. The weight decay is 10−4. A MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer. We train the model for 1000 epochs. The unsupervised pre-training is followed by a supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use learning rates 0.16 for standard 50-layer ResNet , and weight decay and learning rate warmup are removed. For JRPC we disable hidden normalization and use a temperature τ = 128. For all other objectives, we use hidden normalization and τ = 0.5 following previous work (Chen et al., 2020c). For relative parameters, we use α = 1.0, β = 0.005, and γ = 1.0. STL-10 We also perform the pre-training and fine-tuning on STL-10 (Coates et al., 2011) using the model proposed in Chuang et al. (2020). Chuang et al. (2020) proposed to indirectly approximate the distribution of negative samples so that the objective is debiased. However, their implementation of contrastive learning is consistent with Chen et al. (2020b). We use a ResNet with depth 50 as an encoder for pre-training, with Adam optimizer, learning rate 0.001 and weight decay 10−6. 
The temperature τ is set to 0.5 for all objectives other than JRPC, for which we disable hidden normalization and use τ = 128. The downstream task performance increases from 83.4% with JCPC to 84.1% with JRPC.

Confidence Interval. We also provide the confidence intervals of JRPC and JCPC on CIFAR-10, CIFAR-100, and ImageNet, using ResNet-18, ResNet-18, and ResNet-50 respectively (a 95% confidence level is chosen), in Table 4. Both CPC and RPC use the same experimental settings throughout this paper. Here we use the relative parameters (α = 1.0, β = 0.005, γ = 1.0) in JRPC, which give the best performance on CIFAR-10. The confidence intervals of CPC do not overlap with the confidence intervals of RPC, which means the difference in downstream task performance between RPC and CPC is statistically significant.

(Footnote 3: For WPC (Ozair et al., 2019), the global batch normalization during pretraining is disabled since we enforce the 1-Lipschitz constraint by a gradient penalty (Gulrajani et al., 2017).)

A.9 RELATIVE PREDICTIVE CODING ON SPEECH
For speech representation learning, we adopt the general architecture from Oord et al. (2018). Given an input signal x1:T with T time steps, we first pass it through an encoder φθ parametrized by θ to produce a sequence of hidden representations {h1:T}, where ht = φθ(xt). After that, we obtain the contextual representation ct at time step t with a sequential model ψρ parametrized by ρ: ct = ψρ(h1, . . . , ht), where ct contains the context information before time step t. For unsupervised pre-training, we use a multi-layer convolutional network as the encoder φθ, and an LSTM with hidden dimension 256 as the sequential model ψρ. Here, the contrastiveness is between the positive pair (ht+k, ct), where k is the number of time steps ahead, and the negative pairs (hi, ct), where hi is randomly sampled from N, a batch of hidden representations of signals assumed to be unrelated to ct. The scoring function f based on Equation 2 at step t and look-ahead k is fk(h, ct) = exp(h⊤ Wk ct), where Wk is a learnable linear transformation defined separately for each k ∈ {1, . . . , K} and K is predetermined as 12 time steps. The loss in Equation 2 is then formulated as: ℓRPCt,k = −(fk(ht+k, ct) − (α/|N|) Σ
1. What is the focus and contribution of the paper on contrastive representation?
2. What are the strengths of the proposed objective in terms of training stability, minibatch size sensitivity, and downstream task performance?
3. Do you have any concerns regarding the theoretical analysis, particularly in explaining the reason behind the better downstream task performance?
4. How does the reviewer assess the relevance and significance of the missed reference in the paper?
5. What are the limitations of the proposed method compared to other approaches, such as DoE, in estimating mutual information?
6. Can you provide additional explanations or clarifications regarding the notation and the usage of joint distribution and product of marginal distributions?
7. Did you identify any typos or errors in the proof of Lemma 5?
Review
Review
This paper presents a new contrastive representation objective that has good training stability, minibatch size sensitivity, and downstream task performance. This objective is a generalization of the Chi-square divergence, the optimal solution is the density ratio of the joint distribution and the product of marginal distributions, the estimation is consistent, and its variance goes to 0 as the sample size goes to infinity, so the paper is theoretically sound. The authors conduct comprehensive experiments to show that training based on this objective is stable, not sensitive to batch size, and leads to good downstream task performance in vision and in phoneme and speaker classification.

However, the theoretical results don't provide any clue to explain why this estimation leads to better downstream task performance, whereas for a mutual information (MI) estimator it is easy to understand, since MI is a good measure of the dependency between two random variables. Section 3.5 states the relation to MI estimation, but it is just a plug-in method that won't be able to say anything about the goodness. Furthermore, the paper misses an important reference, David McAllester and Karl Stratos, Formal Limitations on the Measurement of Mutual Information, AISTATS 2020, where David and Karl propose a method called DoE that gives a good estimation of mutual information even when the mutual information is large. So I think the authors should compare RPC with DoE in the synthetic data experiment in Section 3.5 when the mutual information is large, say 100, instead of 10. In the case of large MI, all other methods fail to provide a good estimation. From my experience, when CPC is applied to ASR, the batch size is not a sensitive factor for WER results.

In the notation and in Section 2.1, second line from the bottom, I don't think it is appropriate to use the joint distribution for a related or positive pair and the product of marginal distributions for an unrelated or negative pair. It is the density ratio that matters. For a positive pair (X,Y), the MI is large, and for a negative pair (X,Y), the MI is close to 0. In the proof of Lemma 5, there is a typo in the second line from the bottom: the second term should be an expectation over P(X)P(Y), not P(X,Y).
ICLR
Title Self-supervised Representation Learning with Relative Predictive Coding Abstract This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance. The key to the success of RPC is two-fold. First, RPC introduces the relative parameters to regularize the objective for boundedness and low variance. Second, RPC contains no logarithm and exponential score functions, which are the main cause of training instability in prior contrastive objectives. We empirically verify the effectiveness of RPC on benchmark vision and speech self-supervised learning tasks. Lastly, we relate RPC with mutual information (MI) estimation, showing RPC can be used to estimate MI with low variance 1. 1 INTRODUCTION Unsupervised learning has drawn tremendous attention recently because it can extract rich representations without label supervision. Self-supervised learning, a subset of unsupervised learning, learns representations by allowing the data to provide supervision (Devlin et al., 2018). Among its mainstream strategies, self-supervised contrastive learning has been successful in visual object recognition (He et al., 2020; Tian et al., 2019; Chen et al., 2020c), speech recognition (Oord et al., 2018; Rivière et al., 2020), language modeling (Kong et al., 2019), graph representation learning (Velickovic et al., 2019) and reinforcement learning (Kipf et al., 2019). The idea of self-supervised contrastive learning is to learn latent representations such that related instances (e.g., patches from the same image; defined as positive pairs) will have representations within close distance, while unrelated instances (e.g., patches from two different images; defined as negative pairs) will have distant representations (Arora et al., 2019). Prior work has formulated the contrastive learning objectives as maximizing the divergence between the distribution of related and unrelated instances. In this regard, different divergence measurement often leads to different loss function design. For example, variational mutual information (MI) estimation (Poole et al., 2019) inspires Contrastive Predictive Coding (CPC) (Oord et al., 2018). Note that MI is also the KL-divergence between the distributions of related and unrelated instances (Cover & Thomas, 2012). While the choices of the contrastive learning objectives are abundant (Hjelm et al., 2018; Poole et al., 2019; Ozair et al., 2019), we point out that there are three challenges faced by existing methods. The first challenge is the training stability, where an unstable training process with high variance may be problematic. For example, Hjelm et al. (2018); Tschannen et al. (2019); Tsai et al. (2020b) show that the contrastive objectives with large variance cause numerical issues and have a poor downstream performance with their learned representations. The second challenge is the sensitivity to minibatch size, where the objectives requiring a huge minibatch size may restrict their practical usage. For instance, SimCLRv2 (Chen et al., 2020c) utilizes CPC as its contrastive objective and reaches state-of-the-art performances on multiple self-supervised and semi-supervised benchmarks. Nonetheless, the objective is trained with a minibatch size of 8, 192, and this scale of training requires enormous computational power. 
The third challenge is the downstream task performance, which is the one that we would like to emphasize the most. For this reason, in most cases, CPC 1Project page: https://github.com/martinmamql/relative_predictive_coding is the objective that we would adopt for contrastive representation learning, due to its favorable performance in downstream tasks (Tschannen et al., 2019; Baevski et al., 2020). This paper presents a new contrastive representation learning objective: the Relative Predictive Coding (RPC), which attempts to achieve a good balance among these three challenges: training stability, sensitivity to minibatch size, and downstream task performance. At the core of RPC is the relative parameters, which are used to regularize RPC for its boundedness and low variance. From a modeling perspective, the relative parameters act as a `2 regularization for RPC. From a statistical perspective, the relative parameters prevent RPC from growing to extreme values, as well as upper bound its variance. In addition to the relative parameters, RPC contains no logarithm and exponential, which are the main cause of the training instability for prior contrastive learning objectives (Song & Ermon, 2019). To empirically verify the effectiveness of RPC, we consider benchmark self-supervised representation learning tasks, including visual object classification on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015) and speech recognition on LibriSpeech (Panayotov et al., 2015). Comparing RPC to prior contrastive learning objectives, we observe a lower variance during training, a lower minibatch size sensitivity, and consistent performance improvement. Lastly, we also relate RPC with MI estimation, empirically showing that RPC can estimate MI with low variance. 2 PROPOSED METHOD This paper presents a new contrastive representation learning objective - the Relative Predictive Coding (RPC). At a high level, RPC 1) introduces the relative parameters to regularize the objective for boundedness and low variance; and 2) achieves a good balance among the three challenges in the contrastive representation learning objectives: training stability, sensitivity to minibatch size, and downstream task performance. We begin by describing prior contrastive objectives along with their limitations on the three challenges in Section 2.1. Then, we detail our presented objective and its modeling benefits in Section 2.2. An overview of different contrastive learning objectives is provided in Table 1. We defer all the proofs in Appendix. Notation We use an uppercase letter to denote a random variable (e.g., X), a lower case letter to denote the outcome of this random variable (e.g., x), and a calligraphy letter to denote the sample space of this random variable (e.g., X ). Next, if the samples (x, y) are related (or positively-paired), we refer (x, y) ∼ PXY with PXY being the joint distribution of X × Y . If the samples (x, y) are unrelated (negatively-paired), we refer (x, y) ∼ PXPY with PXPY being the product of marginal distributions overX×Y . Last, we define f ∈ F for F being any class of functions f : X ×Y → R. 2.1 PRELIMINARY Contrastive representation learning encourages the contrastiveness between the positive and the negative pairs of the representations from the related data X and Y . 
Specifically, when sampling a pair of representations (x, y) from their joint distribution ((x, y) ∼ PXY ), this pair is defined as a positive pair; when sampling from the product of marginals ((x, y) ∼ PXPY ), this pair is defined as a negative pair. Then, Tsai et al. (2020b) formalizes this idea such that the contrastiveness of the representations can be measured by the divergence between PXY and PXPY , where higher divergence suggests better contrastiveness. To better understand prior contrastive learning objectives, we categorize them in terms of different divergence measurements between PXY and PXPY , with their detailed objectives presented in Table 1. We instantiate the discussion using Contrastive Predictive Coding (Oord et al., 2018, JCPC), which is a lower bound of DKL(PXY ‖PXPY ) with DKL referring to the KL-divergence: JCPC(X,Y ) := sup f∈F E(x,y1)∼PXY ,{yj}Nj=2∼PY [ log ef(x,y1) 1 N ∑N j=1 e f(x,yj) ] . (1) Then, Oord et al. (2018) presents to maximize JCPC(X,Y ), so that the learned representations X and Y have high contrastiveness. We note that JCPC has been commonly used in many recent self-supervised representation learning frameworks (He et al., 2020; Chen et al., 2020b), where they constrain the function to be f(x, y) = cosine(x, y) with cosine(·) being cosine similarity. Under this function design, maximizing JCPC leads the representations of related pairs to be close and representations of unrelated pairs to be distant. The category of modeling DKL(PXY ‖PXPY ) also includes the Donsker-Varadhan objective (JDV (Donsker & Varadhan, 1975; Belghazi et al., 2018)) and the Nguyen-Wainright-Jordan objective (JNWJ (Nguyen et al., 2010; Belghazi et al., 2018)), where Belghazi et al. (2018); Tsai et al. (2020b) show that JDV(X,Y ) = JNWJ(X,Y ) = DKL(PXY ‖PXPY ). The other divergence measurements considered in prior work are DJS(PXY ‖PXPY ) (with DJS referring to the Jenson-Shannon divergence) and DWass(PXY ‖PXPY ) (with DWass referring to the Wassersteindivergence). The instance of modeling DJS(PXY ‖PXPY ) is the Jensen-Shannon f-GAN objective( JJS (Nowozin et al., 2016; Hjelm et al., 2018) ) , where JJS(X,Y ) = 2 ( DJS(PXY ‖PXPY ) − log 2 ) .2 The instance of modeling DWass(PXY ‖PXPY ) is the Wasserstein Predictive Coding( JWPC (Ozair et al., 2019) ) , where JWPC(X,Y ) modifies JCPC(X,Y ) objective (equation 1) by searching the function from F to FL. FL denotes any class of 1-Lipschitz continuous functions from (X × Y) to R, and thus FL ⊂ F . Ozair et al. (2019) shows that JWPC(X,Y ) is the lower bound of bothDKL(PXY ‖PXPY ) andDWass(PXY ‖PXPY ). See Table 1 for all the equations. To conclude, the contrastive representation learning objectives are unsupervised representation learning methods that maximize the distribution divergence between PXY and PXPY . The learned representations cause high contrastiveness, and recent work (Arora et al., 2019; Tsai et al., 2020a) theoretically show that highly-contrastive representations could improve the performance on downstream tasks. After discussing prior contrastive representation learning objectives, we point out three challenges in their practical deployments: training stability, sensitivity to minibatch training size, and downstream task performance. In particular, the three challenges can hardly be handled well at the same time, where we highlight the conclusions in Table 1. 
Training Stability: The training stability highly relates to the variance of the objectives, where Song & Ermon (2019) shows that JDV and JNWJ exhibit inevitable high variance due to their inclusion of exponential function. As pointed out by Tsai et al. (2020b), JCPC, JWPC, and JJS have better training stability because JCPC and JWPC can be realized as a multi-class classification task and JJS can be realized as a binary classification task. The cross-entropy loss adopted in JCPC, JWPC, and JJS is highly-optimized and stable in existing optimization package (Abadi et al., 2016; Paszke et al., 2019). Sensitivity to minibatch training size: Among all the prior contrastive representation learning methods, JCPC is known to be sensitive to the minibatch training size (Ozair et al., 2019). Taking a closer look at equation 1, JCPC deploys an instance selection such that y1 should be selected from {y1, y2, · · · , yN}, with (x, y1) ∼ PXY , (x, yj>1) ∼ PXPY with N being the minibatch size. Previous work (Poole et al., 2019; Song & Ermon, 2019; Chen et al., 2020b; Caron et al., 2020) showed that a large N results in a more challenging instance selection and forces JCPC to have a better contrastiveness of y1 (related instance for x) against {yj}Nj=2 (unrelated instance for x). JDV, JNWJ, and JJS do not consider 2JJS(X,Y ) achieves its supreme value when f∗(x, y) = log(p(x, y)/p(x)p(y)) (Tsai et al., 2020b). Plugin f∗(x, y) into JJS(X,Y ), we can conclude JJS(X,Y ) = 2(DJS(PXY ‖PXPY )− log 2). the instance selection, and JWPC reduces the minibatch training size sensitivity by enforcing 1- Lipschitz constraint. Downstream Task Performance: The downstream task performance is what we care the most among all the three challenges. JCPC has been the most popular objective as it manifests superior performance over the other alternatives (Tschannen et al., 2019; Tsai et al., 2020b;a). We note that although JWPC shows better performance on Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets, we empirically find it not generalizing well to CIFAR-10/100 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). 2.2 RELATIVE PREDICTIVE CODING In this paper, we present Relative Predictive Coding (RPC), which achieves a good balance among the three challenges mentioned above: JRPC(X,Y ) := sup f∈F EPXY [f(x, y)]−αEPXPY [f(x, y)]− β 2 EPXY [ f2(x, y) ] −γ 2 EPXPY [ f2(x, y) ] , (2) where α > 0, β > 0, γ > 0 are hyper-parameters and we define them as relative parameters. Intuitively, JRPC contains no logarithm or exponential, potentially preventing unstable training due to numerical issues. Now, we discuss the roles of α, β, γ. At a first glance, α acts to discourage the scores of PXY and PXPY from being close, and β/γ acts as a `2 regularization coefficient to stop f from becoming large. For a deeper analysis, the relative parameters act to regularize our objective for boundedness and low variance. To show this claim, we first present the following lemma: Lemma 1 (Optimal Solution for JRPC) Let r(x, y) = p(x,y)p(x)p(y) be the density ratio. JRPC has the optimal solution f∗(x, y) = r(x,y)−αβ r(x,y)+γ := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . Lemma 1 suggests that JRPC achieves its supreme value at the ratio rα,β,γ(x, y) indexed by the relative parameters α, β, γ (i.e., we term rα,β,γ(x, y) as the relative density ratio). We note that rα,β,γ(x, y) is an increasing function w.r.t. r(x, y) and is nicely bounded even when r(x, y) is large. 
We will now show that the bounded rα,β,γ suggests the empirical estimation of JRPC has boundeness and low variance. In particular, let {xi, yi}ni=1 be n samples drawn uniformly at random from PXY and {x′j , y′j}mj=1 be m samples drawn uniformly at random from PXPY . Then, we use neural networks to empirically estimate JRPC as Ĵ m,n RPC: Definition 1 (Ĵm,nRPC, empirical estimation of JRPC) We parametrize f via a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where d ∈ N and Θ is compact. Then, Ĵm,nRPC = supfθ∈FΘ 1 n ∑n i=1 fθ(xi, yi)− 1 m ∑m j=1 αfθ(x ′ j , y ′ j)− 1n ∑n i=1 β 2 f 2 θ (xi, yi)− 1m ∑m j=1 γ 2 f 2 θ (x ′ j , y ′ j). Proposition 1 (Boundedness of Ĵm,nRPC, informal) 0 ≤ JRPC ≤ 1 2β + α2 2γ . Then, with probability at least 1− δ, |JRPC − Ĵm,nRPC| = O( √ d+log (1/δ) n′ ), where n ′ = min {n,m}. Proposition 2 (Variance of Ĵm,nRPC, informal) There exist universal constants c1 and c2 that depend only on α, β, γ, such that Var[Ĵm,nRPC] = O ( c1 n + c2 m ) . From the two propositions, whenm and n are large, i.e., the sample sizes are large, Ĵm,nRPC is bounded, and its variance vanishes to 0. First, the boundedness of Ĵm,nRPC suggests Ĵ m,n RPC will not grow to extremely large or small values. Prior contrastive learning objectives with good training stability (e.g., JCPC/JJS/JWPC) also have the boundedness of their objective values. For instance, the empirical estimation of JCPC is less than logN (equation 1) (Poole et al., 2019). Nevertheless, JCPC often performs the best only when minibatch size is large, and empirical performances of JJS and JWPC are not as competitive as JCPC. Second, the upper bound of the variance implies the training of Ĵm,nRPC can be stable, and in practice we observe a much smaller value than the stated upper bound. On the contrary, Song & Ermon (2019) shows that the empirical estimations of JDV and JNWJ exhibit inevitable variances that grow exponentially with the true DKL(PXY ‖PXPY ). Lastly, similar to prior contrastive learning objective that are related to distribution divergence measurement, we associate JRPC with the Chi-square divergence Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)] − 1 (Nielsen & Nock, 2013). The derivations are provided in Appendix. By having P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY , we can rewrite JRPC(X,Y ) as JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)]. Hence, JRPC can be regarded as a generalization of Dχ2 with the relative parameters α, β, γ, where Dχ2 can be recovered from JRPC by specializing α = 0, β = 0 and γ = 1 (e.g., Dχ2 = 2JRPC|α=β=0,γ=1 − 1). Note that JRPC may not be a formal divergence measure with arbitrary α, β, γ. 3 EXPERIMENTS We provide an overview of the experimental section. First, we conduct benchmark self-supervised representation learning tasks spanning visual object classification and speech recognition. This set of experiments are designed to discuss the three challenges of the contrastive representation learning objectives: downstream task performance (Section 3.1), training stability (Section 3.2), and minibatch size sensitivity (Section 3.3). We also provide an ablation study on the choices of the relative parameters in JRPC (Section 3.4). On these experiments we found that JRPC achieves a lower variance during training, a lower batch size insensitivity, and consistent performance improvement. Second, we relate JRPC with mutual information (MI) estimation (Section 3.5). 
The connection is that MI is an average statistic of the density ratio, and we have shown that the optimal solution of JRPC is the relative density ratio (see Lemma 1). Thus we could estimate MI using the density ratio transformed from the optimal solution of JRPC. On these two sets of experiments, we fairly compare JRPC with other contrastive learning objectives. Particularly, across different objectives, we fix the network, learning rate, optimizer, and batch size (we use the default configurations suggested by the original implementations from Chen et al. (2020c), Rivière et al. (2020) and Tsai et al. (2020b).) The only difference will be the objective itself. In what follows, we perform the first set of experiments. We defer experimental details in the Appendix. Datasets. For the visual objective classification, we consider CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015). CIFAR-10/-100 and ImageNet contain labeled images only, while STL-10 contains labeled and unlabeled images. For the speech recognition, we consider LibriSpeech-100h (Panayotov et al., 2015) dataset, which contains 100 hours of 16kHz English speech from 251 speakers with 41 types of phonemes. Training and Evaluation Details. For the vision experiments, we follow the setup from SimCLRv2 (Chen et al., 2020c), which considers visual object recognition as its downstream task. For the speech experiments, we follow the setup from prior work (Oord et al., 2018; Rivière et al., 2020), which consider phoneme classification and speaker identification as the downstream tasks. Then, we briefly discuss the training and evaluation details into three modules: 1) related and unrelated data construction, 2) pre-training, and 3) fine-tuning and evaluation. For more details, please refer to Appendix or the original implementations. . Related and Unrelated Data Construction. In the vision experiment, we construct the related images by applying different augmentations on the same image. Hence, when (x, y) ∼ PXY , x and y are the same image with different augmentations. The unrelated images are two randomly selected samples. In the speech experiment, we define the current latent feature (feature at time t) and the future samples (samples at time > t) as related data. In other words, the feature in the latent space should contain information that can be used to infer future time steps. A latent feature and randomly selected samples would be considered as unrelated data. . Pre-training. The pre-training stage refers to the self-supervised training by a contrastive learning objective. Our training objective is defined in Definition 1, where we use neural networks to parametrize the function using the constructed related and unrelated data. Convolutional neural networks are used for vision experiments. Transformers (Vaswani et al., 2017) and LSTMs (Hochreiter & Schmidhuber, 1997) are used for speech experiments. . Fine-tuning and Evaluation. After the pre-training stage, we fix the parameters in the pre-trained networks and add a small fine-tuning network on top of them. Then, we fine-tune this small network with the downstream labels in the data’s training split. For the fine-tuning network, both vision and speech experiments consider multi-layer perceptrons. Last, we evaluate the fine-tuned representations on the data’s test split. We would like to point out that we do not normalize the hidden representations encoded by the pre-training neural network for loss calculation. 
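As a concrete instance of this unnormalized score, the PyTorch sketch below computes the in-batch RPC loss used for the vision experiments (the exact per-pair form appears as equation 7 in Appendix A.7). The pairing convention, batch shape, and parameter values are illustrative assumptions; only the score s_{i,j} = (z_i · z_j)/τ without hidden normalization follows the paper.

```python
import torch

def rpc_simclr_loss(z, alpha=1.0, beta=0.005, gamma=1.0, tau=128.0):
    """In-batch RPC loss on 2N projected views (a sketch of equation 7, Appendix A.7).

    z: projections of shape (2N, d); rows 2k and 2k + 1 are assumed to be the two
    augmented views of image k. Scores are raw dot products divided by tau, i.e.
    no hidden normalization of the representations is applied.
    """
    two_n = z.shape[0]
    n = two_n // 2
    s = (z @ z.T) / tau                               # s[i, j] = z_i . z_j / tau
    idx = torch.arange(two_n)
    s_pos = s[idx, idx ^ 1]                           # score of each view with its partner
    s_off = s.masked_fill(torch.eye(two_n, dtype=torch.bool), 0.0)   # keep only k != i
    norm = 2 * (n - 1)
    loss_i = -(s_pos
               - alpha * s_off.sum(dim=1) / norm
               - 0.5 * beta * s_pos ** 2
               - 0.5 * gamma * (s_off ** 2).sum(dim=1) / norm)
    return loss_i.mean()

z = torch.randn(512, 128, requires_grad=True)         # 2N = 512 projected views
rpc_simclr_loss(z).backward()
```

The default τ = 128 mirrors the CIFAR-10 setting described later in the appendix.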
This hidden nor- malization technique is widely applied (Tian et al., 2019; Chen et al., 2020b;c) to stabilize training and increase performance for prior objectives, but we find it unnecessary in JRPC. 3.1 DOWNSTREAM TASK PERFORMANCES ON VISION AND SPEECH For the downstream task performance in the vision domain, we test the proposed JRPC and other contrastive learning objectives on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet ILSVRC-2012 (Russakovsky et al., 2015). Here we report the best performances JRPC can get on each dataset (we include experimental details in A.7.) Table 2 shows that the proposed JRPC outperforms other objectives on all datasets. Using JRPC on the largest network (ResNet with depth of 152, channel width of 2 and selective kernels), the performance jumps from 77.80% of JCPC to 78.40% of JRPC. Regarding speech representation learning, the downstream performance for phoneme and speaker classification are shown in Table 3 (we defer experimental details in Appendix A.9.) Compared to JCPC, JRPC improves the phoneme classification results with 4.8 percent and the speaker classification results with 0.3 percent, which is closer to the fully supervised model. Overall, the proposed JRPC performs better than other unsupervised learning objectives on both phoneme classification and speaker classification tasks. 3.2 TRAINING STABILITY We provide empirical training stability comparisons on JDV, JNWJ, JCPC and JRPC by plotting the values of the objectives as the training step increases. We apply the four objectives to the SimCLRv2 framework and train on the CIFAR-10 dataset. All setups of training are exactly the same except the objectives. From our experiments, JDV and JNWJ soon explode to NaN and disrupt training (shown as early stopping in Figure 1a; extremely large values are not plotted due to scale constraints). On the other hand, JRPC and JCPC has low variance, and both enjoy stable training. As a result, performances using the representation learned from unstable JDV and JNWJ suffer in downstream task, while representation learned by JRPC and JCPC work much better. 3.3 MINIBATCH SIZE SENSITIVITY We then provide the analysis on the effect of minibatch size on JRPC and JCPC, since JCPC is known to be sensitive to minibatch size (Poole et al., 2019). We train SimCLRv2 (Chen et al., 2020c) on CIFAR-10 and the model from Rivière et al. (2020) on LibriSpeech-100h using JRPC and JCPC with different minibatch sizes. The settings of relative parameters are the same as Section 3.2. From Figure 1b and 1c, we can observe that both JRPC and JCPC achieve their optimal performance at a large minibatch size. However, when the minibatch size decreases, the performance of JCPC shows higher sensitivity and suffers more when the number of minibatch samples is small. The result suggests that the proposed method might be less sensitive to the change of minibatch size compared to JCPC given the same training settings. 3.4 EFFECT OF RELATIVE PARAMETERS We study the effect of different combinations of relative parameters in JRPC by comparing downstream performances on visual object recognition. We train SimCLRv2 on CIFAR-10 with different combinations of α, β and γ in JRPC and fix all other experimental settings. We choose α ∈ {0, 0.001, 1.0}, β ∈ {0, 0.001, 1.0}, γ ∈ {0, 0.001, 1.0} and we report the best performances under each combination of α, β, and γ. 
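The parameter sweep can be organized as in the following sketch; `pretrain_and_evaluate` is a hypothetical placeholder for the full SimCLRv2 pre-training and fine-tuning pipeline, so the snippet only illustrates the structure of the ablation.

```python
from itertools import product
import random

def pretrain_and_evaluate(alpha, beta, gamma):
    # Hypothetical stand-in: the real study pre-trains SimCLRv2 on CIFAR-10 with the
    # RPC loss under these relative parameters, fine-tunes, and reports downstream
    # accuracy. A random placeholder keeps this sketch self-contained and runnable.
    return random.random()

grid = [0.0, 0.001, 1.0]   # the candidate values swept in Section 3.4
results = {combo: pretrain_and_evaluate(*combo) for combo in product(grid, repeat=3)}
best = max(results, key=results.get)
print("best (alpha, beta, gamma):", best)
```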
From Figure 2, we first observe that α > 0 has better downstream performance than α = 0 when β and γ are fixed. This observation is as expected, since α > 0 encourages representations of related and unrelated samples to be pushed away. Then, we find that a small but nonzero β (β = 0.001) and a large γ (γ = 1.0) give the best performance compared to other combinations. Since β and γ serve as the coefficients of `2 regularization, the results imply that the regularization is a strong and sensitive factor that will influence the performance. The results here are not as competitive as Table 2 because the CIFAR-10 result reported in Table 2 is using a set of relative parameters (α = 1.0, β = 0.005, γ = 1.0) that is different from the combinations in this subsection. Also, we use quite different ranges of γ on ImageNet (see A.7 for details.) In conclusion, we find empirically that a non-zero α, a small β and a large γ will lead to the optimal representation for the downstream task on CIFAR-10. 3.5 RELATION TO MUTUAL INFORMATION ESTIMATION The presented approach also closely relates to mutual information estimation. For random variables X and Y with joint distribution PXY and product of marginals PXPY , the mutual information is defined as I(X;Y ) = DKL(PXY ‖PXPY ). Lemma 1 states that given optimal solution f∗(x, y) of JRPC, we can get the density ratio r(x, y) := p(x, y)/p(x)p(y) as r(x, y) = γ/β+α 1−βf∗(x,y) − γ β . We can empirically estimate r̂(x, y) from the estimated f̂(x, y) via this transformation, and use r̂(x, y) to estimate mutual information (Tsai et al., 2020b). Specifically, I(X;Y ) ≈ 1n ∑n i=1 log r̂(xi, yi) with (xi, yi) ∼ P⊗nX,Y , where P ⊗n X,Y is the uniformly sampled empirical distribution of PX,Y . We follow prior work (Poole et al., 2019; Song & Ermon, 2019; Tsai et al., 2020b) for the experiments. We consider X and Y as two 20-dimensional Gaussians with correlation ρ, and our goal is to estimate the mutual information I(X;Y ). Then, we perform a cubic transformation on y so that y 7→ y3. The first task is referred to as Gaussian task and the second is referred to as Cubic task, where both have the ground truth I(X;Y ) = −10log (1 − ρ2). The models are trained on 20, 000 steps with I(X;Y ) starting at 2 and increased by 2 per 4, 000 steps. Our method is compared with baseline methods JCPC (Oord et al., 2018), JNWJ (Nguyen et al., 2010), JJS (Nowozin et al., 2016), SMILE (Song & Ermon, 2019) and Difference of Entropies (DoE) (McAllester & Stratos, 2020). All approaches use the same network design, learning rate, optimizer and minibatch size for a fair comparison. First, we observe JCPC (Oord et al., 2018) has the smallest variance, while it exhibits a large bias (the estimated mutual information from JCPC has an upper bound log(batch size)). Second, JNWJ (Nguyen et al., 2010) and JJSD (Poole et al., 2019) have large variances, especially in the Cubic task. Song & Ermon (2019) pointed out the limitations of JCPC, JNWJ, and JJSD, and developed the SMILE method, which clips the value of the estimated density function to reduce the variance of the estimators. DoE (McAllester & Stratos, 2020) is neither a lower bound nor a upper bound of mutual information, but can achieve accurate estimates when underlying mutual information is large. JRPC exhibits comparable bias and lower variance compared to the SMILE method, and is more stable than the DoE method. 
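A self-contained sketch of the Gaussian task and of the MI estimate used in this section is given below. To keep it runnable without training, the analytic density ratio of the correlated Gaussians stands in for a learned critic, so the snippet only checks the f* ↔ r transformation and the averaging step, not the learning itself; the relative parameters are example values.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rho, n = 20, 0.7, 50_000
true_mi = -d / 2 * np.log(1 - rho ** 2)     # ground truth of Section 3.5: -10 log(1 - rho^2)

# Correlated Gaussians: each coordinate pair (x_k, y_k) is bivariate normal with correlation rho.
x = rng.standard_normal((n, d))
y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal((n, d))

# Closed-form log density ratio stands in for a trained critic.
log_r = (-(y - rho * x) ** 2 / (2 * (1 - rho ** 2)) + y ** 2 / 2).sum(axis=1) \
        - d / 2 * np.log(1 - rho ** 2)
alpha, beta, gamma = 1.0, 0.005, 1.0
f_star = (np.exp(log_r) - alpha) / (beta * np.exp(log_r) + gamma)    # Lemma 1

# Invert the critic, r = (alpha + gamma f) / (1 - beta f), and average log r-hat.
r_hat = (alpha + gamma * f_star) / (1 - beta * f_star)
mi_hat = np.mean(np.log(r_hat))
print(true_mi, mi_hat)   # the Monte-Carlo estimate should roughly match the ground truth
```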
We would like to highlight our method’s low-variance property, where we neither clip the values of the estimated density ratio nor impose an upper bound of our estimated mutual information. 4 RELATED WORK As a subset of unsupervised representation learning, self-supervised representation learning (SSL) adopts self-defined signals as supervision and uses the learned representation for downstream tasks, such as object detection and image captioning (Liu et al., 2020). We categorize SSL work into two groups: when the signal is the input’s hidden property or the corresponding view of the input. For the first group, for example, Jigsaw puzzle (Noroozi & Favaro, 2016) shuffles the image patches and defines the SSL task for predicting the shuffled positions of the image patches. Other instances are Predicting Rotations (Gidaris et al., 2018) and Shuffle & Learn (Misra et al., 2016). For the second group, the SSL task aims at modeling the co-occurrence of multiple views of data, via the contrastive or the predictive learning objectives (Tsai et al., 2020a). The predictive objectives encourage reconstruction from one view of the data to the other, such as predicting the lower part of an image from its upper part (ImageGPT by Chen et al. (2020a)). Comparing the contrastive with predictive learning approaches, Tsai et al. (2020a) points out that the former requires less computational resources for a good performance but suffers more from the over-fitting problem. Theoretical analysis (Arora et al., 2019; Tsai et al., 2020a; Tosh et al., 2020) suggests the contrastively learned representations can lead to a good downstream performance. Beyond the theory, Tian et al. (2020) shows what matters more for the performance are 1) the choice of the contrastive learning objective; and 2) the creation of the positive and negative data pairs in the contrastive objective. Recent work (Khosla et al., 2020) extends the usage of contrastive learning from the selfsupervised setting to the supervised setting. The supervised setting defines the positive pairs as the data from the same class in the contrastive objective, while the self-supervised setting defines the positive pairs as the data with different augmentations. Our work also closely rates to the skewed divergence measurement between distributions (Lee, 1999; 2001; Nielsen, 2010; Yamada et al., 2013). Recall that the usage of the relative parameters plays a crucial role to regularize our objective for its boundness and low variance. This idea is similar to the skewed divergence measurement, that when calculating the divergence between distributions P and Q, instead of considering D(P ‖Q), these approaches consider D(P ‖αP + (1 − α)Q) with D representing the divergence and 0 < α < 1. A natural example is that the Jensen-Shannon divergence is a symmetric skewed KL divergence: DJS(P ‖Q) = 0.5DKL(P ‖ 0.5P + 0.5Q) + 0.5DKL(Q ‖ 0.5P + 0.5Q). Compared to the non-skewed counterpart, the skewed divergence has shown to have a more robust estimation for its value (Lee, 1999; 2001; Yamada et al., 2013). Different from these works that focus on estimating the values of distribution divergence, we focus on learning self-supervised representations. 5 CONCLUSION In this work, we present RPC, the Relative Predictive Coding, that achieves a good balance among the three challenges when modeling a contrastive learning objective: training stability, sensitivity to minibatch size, and downstream task performance. 
We believe this work brings an appealing option for training self-supervised models and inspires future work to design objectives for balancing the aforementioned three challenges. In the future, we are interested in applying RPC in other application domains and developing more principled approaches for better representation learning. ACKNOWLEDGEMENT This work was supported in part by the NSF IIS1763562, NSF Awards #1750439 #1722822, National Institutes of Health, IARPA D17PC00340, ONR Grant N000141812861, and Facebook PhD Fellowship. We would also like to acknowledge NVIDIA’s GPU support and Cloud TPU support from Google’s TensorFlow Research Cloud (TFRC). A APPENDIX A.1 PROOF OF LEMMA 1 IN THE MAIN TEXT Lemma 2 (Optimal Solution for JRPC, restating Lemma 1 in the main text) Let JRPC(X,Y ) := sup f∈F EPXY [f(x, y)]−αEPXPY [f(x, y)]− β 2 EPXY [ f2(x, y) ] −γ 2 EPXPY [ f2(x, y) ] and r(x, y) = p(x,y)p(x)p(y) be the density ratio. JRPC has the optimal solution f∗(x, y) = r(x, y)− α β r(x, y) + γ := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . Proof: The second-order functional derivative of the objective is −βdPX,Y − γdPXPY , which is always negative. The negative second-order functional derivative implies the objective has a supreme value. Then, take the first-order functional derivative ∂JRPC∂m and set it to zero: dPX,Y − α · dPXPY − β · f(x, y) · dPX,Y − γ · f(x, y) · dPXPY = 0. We then get f∗(x, y) = dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY = p(x, y)− αp(x)p(y) βp(x, y) + γp(x)p(y) = r(x, y)− α βr(x, y) + γ . Since 0 ≤ r(x, y) ≤ ∞, we have −αγ ≤ r(x,y)−α βr(x,y)+γ ≤ 1 β . Hence, ∀β 6= 0, γ 6= 0, f∗(x, y) := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . A.2 RELATION BETWEEN JRPC AND Dχ2 In this subsection, we aim to show the following: 1) Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)] − 1; and 2) JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)] by having P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY . Lemma 3 Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)]− 1 Proof: By definition (Nielsen & Nock, 2013), Dχ2(PXY ‖PXPY ) = ∫ (dPXY )2 dPXPY − 1 = ∫ ( dPXY dPXPY )2 dPXPY − 1 = ∫ ( p(x, y) p(x)p(y) )2 dPXPY − 1 = ∫ r2(x, y)dPXPY − 1 = EPXPY [r2(x, y)]− 1. Lemma 4 Defining P ′ = ββ+γPXY + γ β+γPXPY as a mixture distribution of PXY and PXPY , JRPC(X,Y ) = β+γ 2 EP ′ [r 2 α,β,γ(x, y)]. Proof: Plug in the optimal solution f∗(x, y) = dPX,Y −α·dPXPYβ·dPX,Y +γ·dPXPY (see Lemma 2) into JRPC: JRPC = EPXY [f∗(x, y)]− αEPXPY [f∗(x, y)]− β 2 EPXY [ f∗2(x, y) ] − γ 2 EPXPY [ f∗2(x, y) ] = ∫ f∗(x, y) · ( dPXY − α · dPXPY ) − 1 2 f∗2(x, y) · ( β · dPXY + γ · dPXPY ) = ∫ dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ( dPXY − α · dPXPY ) − 1 2 ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = 1 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = β + γ 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β β + γ · dPXY + γ β + γ · dPXPY ) . Since we define rα,β,γ = dPX,Y −α·dPXPY β·dPX,Y +γ·dPXPY and P ′ = ββ+γPXY + γ β+γPXPY , JRPC = β + γ 2 EP ′ [r2α,β,γ(x, y)]. A.3 PROOF OF PROPOSITION 1 IN THE MAIN TEXT The proof contains two parts: showing 0 ≤ JRPC ≤ 12β + α2 2γ (see Section A.3.1) and Ĵ m,n RPC is a consistent estimator for JRPC (see Section A.3.2). A.3.1 BOUNDNESS OF JRPC Lemma 5 (Boundness of JRPC) 0 ≤ JRPC ≤ 12β + α2 2γ Proof: Lemma 4 suggests JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)] with P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY . Hence, it is obvious JRPC(X,Y ) ≥ 0. 
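Before establishing the upper bound, the identity of Lemma 4 and the bound of Lemma 5 can be sanity-checked numerically on a small discrete joint distribution; the sketch below uses an arbitrary random joint and example relative parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
p_xy = rng.random((4, 5)); p_xy /= p_xy.sum()            # arbitrary discrete joint
p_prod = p_xy.sum(1, keepdims=True) * p_xy.sum(0, keepdims=True)   # product of marginals

alpha, beta, gamma = 0.3, 0.2, 0.5
r = p_xy / p_prod
r_rel = (r - alpha) / (beta * r + gamma)                 # optimal critic from Lemma 2

# J_RPC evaluated directly at the optimal critic ...
j_direct = ((r_rel * p_xy).sum() - alpha * (r_rel * p_prod).sum()
            - 0.5 * beta * (r_rel ** 2 * p_xy).sum()
            - 0.5 * gamma * (r_rel ** 2 * p_prod).sum())

# ... versus the mixture form of Lemma 4 with P' = (beta P_XY + gamma P_X P_Y) / (beta + gamma).
p_mix = (beta * p_xy + gamma * p_prod) / (beta + gamma)
j_mixture = 0.5 * (beta + gamma) * (r_rel ** 2 * p_mix).sum()

assert np.isclose(j_direct, j_mixture)
print(j_direct, "<=", 1 / (2 * beta) + alpha ** 2 / (2 * gamma))     # bound of Lemma 5
```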
We leverage the intermediate results in the proof of Lemma 4: JRPC(X,Y ) = 1 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = 1 2 ∫ dPX,Y ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) − α 2 ∫ dPXPY ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) = 1 2 EPXY [rα,β,γ(x, y)]− α 2 EPXPY [rα,β,γ(x, y)]. Since −αγ ≤ rα,β,γ ≤ 1 β , JRPC(X,Y ) ≤ 1 2β + α2 2γ . A.3.2 CONSISTENCY We first recall the definition of the estimation of JRPC: Definition 2 (Ĵm,nRPC, empirical estimation of JRPC, restating Definition 1 in the main text) We parametrize f via a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where d ∈ N and Θ is compact. Let {xi, yi}ni=1 be n samples drawn uniformly at random from PXY and {x′j , y′j}mj=1 be m samples drawn uniformly at random from PXPY . Then, Ĵm,nRPC = sup fθ∈FΘ 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j). Our goal is to show that Ĵm,nRPC is a consistent estimator for JRPC. We begin with the following definition: Ĵm,nRPC,θ := 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j) (3) and E [ ĴRPC,θ ] := EPXY [fθ(x, y)]−αEPXPY [fθ(x, y)]− β 2 EPXY [f2θ (x, y)]− γ 2 EPXPY [f2θ (x, y)]. (4) Then, we follow the steps: • The first part is about estimation. We show that, with high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ. • The second part is about approximation. We will apply the universal approximation lemma of neural networks (Hornik et al., 1989) to show that there exists a network θ∗ such that E [ ĴRPC,θ∗ ] is close to JRPC. Part I - Estimation: With high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ. Throughout the analysis on the uniform convergence, we need the assumptions on the boundness and smoothness of the function fθ. Since we show the optimal function f is bounded in JRPC, we can use the same bounded values for fθ without losing too much precision. The smoothness of the function suggests that the output of the network should only change slightly when only slightly perturbing the parameters. Specifically, the two assumptions are as follows: Assumption 1 (boundness of fθ) There exist universal constants such that ∀fθ ∈ FΘ, CL ≤ fθ ≤ CU . For notations simplicity, we let M = CU − CL be the range of fθ and U = max {|CU |, |CL|} be the maximal absolute value of fθ. In the paper, we can choose to constrain that CL = −αγ and CU = 1 β since the optimal function f ∗ has −αγ ≤ f ∗ ≤ 1β . Assumption 2 (smoothness of fθ) There exists constant ρ > 0 such that ∀(x, y) ∈ (X × Y) and θ1, θ2 ∈ Θ, |fθ1(x, y)− fθ2(x, y)| ≤ ρ|θ1 − θ2|. Now, we can bound the rate of uniform convergence of a function class in terms of covering number (Bartlett, 1998): Lemma 6 (Estimation) Let > 0 and N (Θ, ) be the covering number of Θ with radius . Then, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) . Proof: For notation simplicity, we define the operators • P (f) = EPXY [f(x, y)] and Pn(f) = 1n ∑n i=1 f(xi, yi) • Q(f) = EPXPY [f(x, y)] and Qm(f) = 1m ∑m j=1 f(x ′ j , y ′ j) Hence,∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ = ∣∣Pn(fθ)− P (fθ)− αQm(fθ) + αQ(fθ)− βPn(f2θ ) + βP (f2θ )− γQm(f2θ ) + γQ(f2θ )∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β ∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣ Let ′ = 4ρ ( 1+α+2(β+γ)U ) and T := N (Θ, ′). 
LetC = {fθ1 , fθ2 , · · · , fθT }with {θ1, θ2, · · · , θT } be such that B∞(θ1, ′), · · · , B∞(θT , ′) are ′ cover. Hence, for any fθ ∈ FΘ, there is an fθk ∈ C such that ‖θ − θk‖∞ ≤ ′. Then, for any fθk ∈ C:∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β ∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣ ≤ |Pn(fθk)− P (fθk)|+ |Pn(fθ)− Pn(fθk)|+ |P (fθ)− P (fθk)| + α ( |Qm(fθk)−Q(fθk)|+ |Qm(fθ)−Qm(fθk)|+ |Q(fθ)−Q(fθk)| ) + β ( ∣∣Pn(f2θk)− P (f2θk)∣∣+ ∣∣Pn(f2θ )− Pn(f2θk)∣∣+ ∣∣P (f2θ )− P (f2θk)∣∣ ) + γ ( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ ∣∣Qm(f2θ )−Qm(f2θk)∣∣+ ∣∣Q(f2θ )−Q(f2θk)∣∣ ) ≤ |Pn(fθk)− P (fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖ + α ( |Qm(fθk)−Q(fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖ ) + β ( ∣∣Pn(f2θk)− P (f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) + γ ( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) = |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ + 2ρ ( 1 + α+ 2(β + γ)U ) ‖θ − θk‖ ≤ |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 , where • |Pn(fθ)− Pn(fθk)| ≤ ρ‖θ − θk‖ due to Assumption 2, and the result also applies for |P (fθ)− P (fθk)|, |Qm(fθ)−Qm(fθk)|, and |Q(fθ)−Q(fθk)|. • ∣∣Pn(f2θ )− Pn(f2θk)∣∣ ≤ 2‖fθ‖∞ρ‖θ−θk‖ ≤ 2ρU‖θ−θk‖ due to Assumptions 1 and 2. The result also applies for ∣∣P (f2θ )− P (f2θk)∣∣, ∣∣Qm(f2θ )−Qm(f2θk)∣∣, and ∣∣Q(f2θ )−Q(f2θk)∣∣. Hence, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤Pr ( max fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 ≥ ) = Pr ( max fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2 ) ≤ T∑ k=1 Pr ( |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2) ≤ T∑ k=1 Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) + Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) + Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8)+ Pr(γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) . With Hoeffding’s inequality, • Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) ≤ 2exp ( − n 2 32M2 ) • Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) ≤ 2exp ( − m 2 32M2α2 ) • Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8) ≤ 2exp(− n 232U2β2) • Pr ( γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) ≤ 2exp(− m 232U2γ2) To conclude, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) . Part II - Approximation: Neural Network Universal Approximation. We leverage the universal function approximation lemma of neural network Lemma 7 (Approximation (Hornik et al., 1989)) Let > 0. There exists d ∈ N and a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where Θ is compact, such that inf fθ∈FΘ ∣∣∣E[ĴRPC,θ]− JRPC∣∣∣ ≤ . Part III - Bringing everything together. Now, we are ready to bring the estimation and approximation together to show that there exists a neural network θ∗ such that, with high probability, Ĵm,nRPC,θ can approximate JRPC with n′ = min {n,m} at a rate of O(1/ √ n′): Proposition 3 With probability at least 1 − δ, ∃θ∗ ∈ Θ, |JRPC − Ĵm,nRPC,θ| = O( √ d+log (1/δ) n′ ), where n′ = min {n,m}. Proof: The proof follows by combining Lemma 6 and 7. First, Lemma 7 suggests, ∃θ∗ ∈ Θ,∣∣∣E[ĴRPC,θ∗]− JRPC∣∣∣ ≤ 2 . Next, we perform analysis on the estimation error, aiming to find n,m and the corresponding probability, such that ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ∗]∣∣∣ ≤ 2 . 
Applying Lemma 6 with the covering number of the neural network: ( N (Θ, ) = O ( exp ( d log (1/ ) )) (Anthony & Bartlett, 2009) ) and let n′ = min{n,m}: Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ 2 ) ≤2N (Θ, 8ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 128M2 ) + exp ( − m 2 128M2α2 ) + exp ( − n 2 128U2β2 ) + exp ( − m 2 128U2γ2 )) =O ( exp ( d log (1/ )− n′ 2 )) , where the big-O notation absorbs all the constants that do not require in the following derivation. Since we want to bound the probability with 1− δ, we solve the such that exp ( d log (1/ )− n′ 2 ) ≤ δ. With log (x) ≤ x− 1, n′ 2 + d( − 1) ≥ n′ 2 + dlog ≥ log (1/δ), where this inequality holds when = O (√ d+ log (1/δ) n′ ) . A.4 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM AN ASYMPTOTIC VIEWPOINT Here, we provide the variance analysis on Ĵm,nRPC via an asymptotic viewpoint. First, assuming the network is correctly specified, and hence there exists a network parameter θ∗ satisfying f∗(x, y) = fθ∗(x, y) = rα,β,γ(x, y). Then we recall that Ĵ m,n RPC is a consistent estimator of J RPC (see Proposition 3), and under regular conditions, the estimated network parameter θ̂ in Ĵm,nRPC satisfying the asymptotic normality in the large sample limit (see Theorem 5.23 in (Van der Vaart, 2000)). We recall the definition of Ĵm,nRPC,θ in equation 3 and let n ′ = min{n,m}, the asymptotic expansion of Ĵm,nRPC has Ĵm,nRPC,θ∗ = Ĵ m,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + o(‖θ∗ − θ̂‖) = Ĵm,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + op( 1√ n′ ) = Ĵm,n RPC,θ̂ + op( 1√ n′ ), (5) where ˙̂Jm,n RPC,θ̂ = 0 since θ̂ is the estimation from Ĵm,nRPC = sup fθ∈FΘ Ĵm,nRPC,θ. Next, we recall the definition in equation 4: E[ĴRPC,θ̂] = EPXY [fθ̂(x, y)]− αEPXPY [fθ̂(x, y)]− β 2 EPXY [f2θ̂ (x, y)]− γ 2 EPXPY [f2θ̂ (x, y)]. Likewise, the asymptotic expansion of E[ĴRPC,θ] has E[ĴRPC,θ̂] = E[ĴRPC,θ∗ ] + E[ ˙̂ JRPC,θ∗ ](θ̂ − θ∗) + o(‖θ̂ − θ∗‖) = E[ĴRPC,θ∗ ] + E[ ˙̂JRPC,θ∗ ](θ̂ − θ∗) + op( 1√ n′ ) = E[ĴRPC,θ∗ ] + op( 1√ n′ ), (6) where E[ ˙̂JRPC,θ∗ ] = 0 since E[ĴRPC,θ∗ ] = JRPC and θ∗ satisfying f∗(x, y) = fθ∗(x, y). Combining equations 5 and 6: Ĵm,n RPC,θ̂ − E[ĴRPC,θ̂] =Ĵ m,n RPC,θ∗ − JRPC + op( 1√ n′ ) = 1 n n∑ i=1 f∗θ (xi, yi)− α 1 m m∑ j=1 f∗θ (x ′ j , y ′ j)− β 2 1 n n∑ i=1 f2θ∗(xi, yi)− γ 2 1 m m∑ j=1 f2θ∗(x ′ j , y ′ j) − EPXY [f∗(x, y)] + αEPXPY [f∗(x, y)] + β 2 EPXY [ f∗2(x, y) ] + γ 2 EPXPY [ f∗2(x, y) ] + op( 1√ n′ ) = 1 n n∑ i=1 rα,β,γ(xi, yi)− α 1 m m∑ j=1 rα,β,γ(x ′ j , y ′ j)− β 2 1 n n∑ i=1 r2α,β,γ(xi, yi)− γ 2 1 m m∑ j=1 r2α,β,γ(x ′ j , y ′ j) − EPXY [rα,β,γ(x, y)] + αEPXPY [rα,β,γ(x, y)] + β 2 EPXY [ r2α,β,γ(x, y) ] + γ 2 EPXPY [ r2α,β,γ(x, y) ] + op( 1√ n′ ) = 1√ n · 1√ n n∑ i=1 ( rα,β,γ(xi, yi)− β 2 r2α,β,γ(xi, yi)− EPXY [ rα,β,γ(x, y)− β 2 r2α,β,γ(x, y) ]) − 1√ m · 1√ m m∑ j=1 ( αrα,β,γ(x ′ j , y ′ j) + γ 2 r2α,β,γ(x ′ j , y ′ j)− EPXPY [ αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y) ]) + op( 1√ n′ ). Therefore, the asymptotic Variance of Ĵm,nRPC is Var[Ĵm,nRPC] = 1 n VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] + 1 m VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] + o( 1 n′ ). First, we look at VarPXY [rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y)]. Since β > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − 2αγ+βα 2 2γ2 ≤ rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y) ≤ 12β . Hence, VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] ≤ max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} . Next, we look at VarPXPY [αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y)]. Since α ≥ 0, γ > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − α2 2γ ≤ αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y) ≤ 2αβ+γ 2β2 . 
Hence, VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] ≤ max {(α2 2γ )2 , (2αβ + γ 2β2 )2} . Combining everything together, we restate the Proposition 2 in the main text: Proposition 4 (Asymptotic Variance of Ĵm,nRPC) Var[Ĵm,nRPC] = 1 n VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] + 1 m VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] + o( 1 n′ ) ≤ 1 n max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} + 1 m max {(α2 2γ )2 , (2αβ + γ 2β2 )2} + o( 1 n′ ) A.5 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM BOUNDNESS OF fθ As discussed in Assumption 1, for the estimation Ĵm,nRPC, we can bound the function fθ in FΘ within [−αγ , 1 β ] without losing precision. Then, re-arranging Ĵ m,n RPC: sup fθ∈FΘ 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j) sup fθ∈FΘ 1 n n∑ i=1 ( fθ(xi, yi)− β 2 f2θ (xi, yi) ) + 1 m n∑ j=m ( αfθ(x ′ j , y ′ j) + γ 2 f2θ (x ′ j , y ′ j) ) Then, since −αγ ≤ fθ(·, ·) ≤ 1 β , basic calculations give us −2αγ + βα 2 2γ2 ≤ fθ(xi, yi)− β 2 f2θ (xi, yi) ≤ 1 2β and −α 2 2γ ≤ αfθ(x′j , y′j)+ γ 2 f2θ (x ′ j , y ′ j) ≤ 2αβ + γ 2β2 . The resulting variances have Var[fθ(xi, yi)− β 2 f2θ (xi, yi)] ≤ max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} and Var[αfθ(x ′ j , y ′ j) + γ 2 f2θ (x ′ j , y ′ j)] ≤ max {(α2 2γ )2 , (2αβ + γ 2β2 )2} . Taking the mean of m,n independent random variables gives the result: Proposition 5 (Variance of Ĵm,nRPC) Var[Ĵm,nRPC] ≤ 1 n max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} + 1 m max {(α2 2γ )2 , (2αβ + γ 2β2 )2} . A.6 IMPLEMENTATION OF EXPERIMENTS For visual representation learning, we follow the implementation in https://github.com/ google-research/simclr. For speech representation learning, we follow the implementation in https://github.com/facebookresearch/CPC_audio. For MI estimation, we follow the implementation in https://github.com/yaohungt/Pointwise_ Dependency_Neural_Estimation/tree/master/MI_Est_and_CrossModal.. A.7 RELATIVE PREDICTIVE CODING ON VISION The whole pipeline of pretraining contains the following steps: First, a stochastic data augmentation will transform one image sample xk to two different but correlated augmented views, x′2k−1 and x′2k. Then a base encoder f(·) implemented using ResNet (He et al., 2016) will extract representations from augmented views, creating representations h2k−1 and h2k. Later a small neural network g(·) called projection head will map h2k−1 and h2k to z2k−1 and z2k in a different latent space. For each minibatch of N samples, there will be 2N views generated. For each image xk there will be one positive pair x′2k−1 and x ′ 2k and 2(N − 1) negative samples. The RPC loss between a pair of positive views, x′i and x ′ j (augmented from the same image) , can be calculated by the substitution fθ(x ′ i,x ′ j) = (zi · zj)/τ = si,j (τ is a hyperparameter) to the definition of RPC: `RPCi,j = −(si,j − α 2(N − 1) 2N∑ k=1 1[k 6=i]si,k − β 2 s2i,j − γ 2 · 2(N − 1) 2N∑ k=1 1[k6=i]s 2 i,k) (7) For losses other than RPC, a hidden normalization of si,j is often required by replacing zi · zj with (zi ·zj)/|zi||zj |. CPC and WPC adopt this, while other objectives needs it to help stabilize training variance. RPC does not need this normalization. A.8 CIFAR-10/-100 AND IMAGENET EXPERIMENTS DETAILS ImageNet Following the settings in (Chen et al., 2020b;c), we train the model on Cloud TPU with 128 cores, with a batch size of 4, 096 and global batch normalization 3 (Ioffe & Szegedy, 2015). 
Here we refer to the term batch size as the number of images (or utterances in the speech experiments) we use per GPU, while the term minibatch size refers to the number of negative samples used to calculate the objective, such as CPC or our proposed RPC. The largest model we train is a 152-layer ResNet with selective kernels (SK) (Li et al., 2019) and 2× wider channels. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate linearly increases for the first 20 epochs, reaching a maximum of 6.4, then decayed with cosine decay schedule. The weight decay is 10−4. A MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer, and train the model for only 100 epochs rather than 800 epochs due to computational constraints. These two options slightly reduce CPC’s performance benchmark for about 2% with the exact same setting. The unsupervised pre-training is followed by a supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use learning rates 0.16 and 0.064 for standard 50-layer ResNet and larger 152-layer ResNet respectively, and weight decay and learning rate warmup are removed. Different from Chen et al. (2020c), we use a batch size of 4, 096, and we do not use global batch normalization for fine-tuning. For JRPC we disable hidden normalization and use a temperature τ = 32. For all other objectives, we use hidden normalization and τ = 0.1 following previous work (Chen et al., 2020c). For relative parameters, we use α = 0.3, β = 0.001, γ = 0.1 and α = 0.3, β = 0.001, γ = 0.005 for ResNet-50 and ResNet-152 respectively. CIFAR-10/-100 Following the settings in (Chen et al., 2020b), we train the model on a single GPU, with a batch size of 512 and global batch normalization (Ioffe & Szegedy, 2015). We use ResNet (He et al., 2016) of depth 18 and depth 50, and does not use Selective Kernel (Li et al., 2019) or a multiplied width size. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate linearly increases for the first 20 epochs, reaching a maximum of 6.4, then decayed with cosine decay schedule. The weight decay is 10−4. A MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer. We train the model for 1000 epochs. The unsupervised pre-training is followed by a supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use learning rates 0.16 for standard 50-layer ResNet , and weight decay and learning rate warmup are removed. For JRPC we disable hidden normalization and use a temperature τ = 128. For all other objectives, we use hidden normalization and τ = 0.5 following previous work (Chen et al., 2020c). For relative parameters, we use α = 1.0, β = 0.005, and γ = 1.0. STL-10 We also perform the pre-training and fine-tuning on STL-10 (Coates et al., 2011) using the model proposed in Chuang et al. (2020). Chuang et al. (2020) proposed to indirectly approximate the distribution of negative samples so that the objective is debiased. However, their implementation of contrastive learning is consistent with Chen et al. (2020b). We use a ResNet with depth 50 as an encoder for pre-training, with Adam optimizer, learning rate 0.001 and weight decay 10−6. 
The temperature τ is set to 0.5 for all objectives other than JRPC, which disables hidden normalization and use τ = 128. The downstream task performance increases from 83.4% of JCPC to 84.1% of JRPC. Confidence Interval We also provide the confidence interval of JRPC and JCPC on CIFAR-10, CIFAR-100 and ImageNet, using ResNet-18, ResNet-18 and ResNet-50 respectively (95% confi- 3For WPC (Ozair et al., 2019), the global batch normalization during pretraining is disabled since we enforce 1-Lipschitz by gradient penalty (Gulrajani et al., 2017). dence level is chosen) in Table 4. Both CPC and RPC use the same experimental settings throughout this paper. Here we use the relative parameters (α = 1.0, β = 0.005, γ = 1.0) in JRPC which gives the best performance on CIFAR-10. The confidence intervals of CPC do not overlap with the confidence intervals of RPC, which means the difference of the downstream task performance between RPC and CPC is statistically significant. A.9 RELATIVE PREDICTIVE CODING ON SPEECH For speech representation learning, we adopt the general architecture from Oord et al. (2018). Given an input signal x1:T with T time steps, we first pass it through an encoder φθ parametrized by θ to produce a sequence of hidden representations {h1:T } where ht = φθ(xt). After that, we obtain the contextual representation ct at time step t with a sequential model ψρ parametrized by ρ: ct = ψρ(h1, . . . ,ht), where ct contains context information before time step t. For unsupervised pre-training, we use a multi-layer convolutional network as the encoder φθ, and an LSTM with hidden dimension 256 as the sequential model ψρ. Here, the contrastiveness is between the positive pair (ht+k, ct) where k is the number of time steps ahead, and the negative pairs (hi, ct), where hi is randomly sampled fromN , a batch of hidden representation of signals assumed to be unrelated to ct. The scoring function f based on Equation 2 at step t and look-ahead k will be fk = fk(h, ct) = exp((h)>Wkct), where Wk is a learnable linear transformation defined separately for each k ∈ {1, ...,K} and K is predetermined as 12 time steps. The loss in Equation 2 will then be formulated as: `RPCt,k = −(fk(ht+k, ct)− α |N | ∑
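The loss expression above is cut off in the extracted text. As a rough sketch (an assumption on our part: the missing negative-sample terms are taken to mirror equation 7), the per-step speech loss could be implemented as below. In the paper the negative set N consists of hidden representations from unrelated utterances, whereas this self-contained sketch reuses the other time steps of the same sequence as stand-ins, and the relative-parameter values are placeholders.

```python
import torch

def rpc_speech_loss(h, c, W_k, k, alpha=1.0, beta=0.001, gamma=1.0):
    # Scores follow the paper's definition f_k(h, c_t) = exp(h^T W_k c_t).
    scores = (h @ W_k @ c.T).exp()                 # scores[i, t] = f_k(h_i, c_t)
    t = torch.arange(h.shape[0] - k)               # anchors that still have a step t + k
    f_pos = scores[t + k, t]                       # positive pair (h_{t+k}, c_t)
    f_neg = scores[:, t]                           # stand-in negative set (see note above)
    loss_t = -(f_pos
               - alpha * f_neg.mean(dim=0)
               - 0.5 * beta * f_pos ** 2
               - 0.5 * gamma * (f_neg ** 2).mean(dim=0))
    return loss_t.mean()

T, d_h, d_c, k = 128, 256, 256, 3
h, c = torch.randn(T, d_h), torch.randn(T, d_c)    # stand-ins for encoder / context outputs
W_k = (0.01 * torch.randn(d_h, d_c)).requires_grad_()
rpc_speech_loss(h, c, W_k, k).backward()
```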
1. What are the key contributions and novel aspects introduced by the paper in contrastive learning?
2. What are the strengths of the paper, particularly in its empirical and theoretical support?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. What are the weaknesses of the paper compared to prior works?
Review
The authors provide a clear review of different divergences used in contrastive learning and their relative strengths and weaknesses in terms of training stability, minibatch size dependence, and usefulness on downstream tasks. This motivates the need for a new divergence, which they introduce based upon the chi-squared divergence. They provide strong empirical and theoretical support for the new divergence, with extensive experiments on large-scale image and speech classification tasks. They also perform comparison studies across batch size and training stability that support their earlier arguments, and a hyperparameter sweep across term weights that makes it clearer how to tune them in later work. Further, they demonstrate the decreased bias and variance in MI estimation experiments. The paper is well-written, and provides helpful context that not only motivates the value of the new technique but also contrasts it quantitatively and qualitatively with existing techniques, which helps inform the reader about the broader field.
Training Stability: The training stability highly relates to the variance of the objectives, where Song & Ermon (2019) shows that JDV and JNWJ exhibit inevitable high variance due to their inclusion of exponential function. As pointed out by Tsai et al. (2020b), JCPC, JWPC, and JJS have better training stability because JCPC and JWPC can be realized as a multi-class classification task and JJS can be realized as a binary classification task. The cross-entropy loss adopted in JCPC, JWPC, and JJS is highly-optimized and stable in existing optimization package (Abadi et al., 2016; Paszke et al., 2019). Sensitivity to minibatch training size: Among all the prior contrastive representation learning methods, JCPC is known to be sensitive to the minibatch training size (Ozair et al., 2019). Taking a closer look at equation 1, JCPC deploys an instance selection such that y1 should be selected from {y1, y2, · · · , yN}, with (x, y1) ∼ PXY , (x, yj>1) ∼ PXPY with N being the minibatch size. Previous work (Poole et al., 2019; Song & Ermon, 2019; Chen et al., 2020b; Caron et al., 2020) showed that a large N results in a more challenging instance selection and forces JCPC to have a better contrastiveness of y1 (related instance for x) against {yj}Nj=2 (unrelated instance for x). JDV, JNWJ, and JJS do not consider 2JJS(X,Y ) achieves its supreme value when f∗(x, y) = log(p(x, y)/p(x)p(y)) (Tsai et al., 2020b). Plugin f∗(x, y) into JJS(X,Y ), we can conclude JJS(X,Y ) = 2(DJS(PXY ‖PXPY )− log 2). the instance selection, and JWPC reduces the minibatch training size sensitivity by enforcing 1- Lipschitz constraint. Downstream Task Performance: The downstream task performance is what we care the most among all the three challenges. JCPC has been the most popular objective as it manifests superior performance over the other alternatives (Tschannen et al., 2019; Tsai et al., 2020b;a). We note that although JWPC shows better performance on Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets, we empirically find it not generalizing well to CIFAR-10/100 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). 2.2 RELATIVE PREDICTIVE CODING In this paper, we present Relative Predictive Coding (RPC), which achieves a good balance among the three challenges mentioned above: JRPC(X,Y ) := sup f∈F EPXY [f(x, y)]−αEPXPY [f(x, y)]− β 2 EPXY [ f2(x, y) ] −γ 2 EPXPY [ f2(x, y) ] , (2) where α > 0, β > 0, γ > 0 are hyper-parameters and we define them as relative parameters. Intuitively, JRPC contains no logarithm or exponential, potentially preventing unstable training due to numerical issues. Now, we discuss the roles of α, β, γ. At a first glance, α acts to discourage the scores of PXY and PXPY from being close, and β/γ acts as a `2 regularization coefficient to stop f from becoming large. For a deeper analysis, the relative parameters act to regularize our objective for boundedness and low variance. To show this claim, we first present the following lemma: Lemma 1 (Optimal Solution for JRPC) Let r(x, y) = p(x,y)p(x)p(y) be the density ratio. JRPC has the optimal solution f∗(x, y) = r(x,y)−αβ r(x,y)+γ := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β . Lemma 1 suggests that JRPC achieves its supreme value at the ratio rα,β,γ(x, y) indexed by the relative parameters α, β, γ (i.e., we term rα,β,γ(x, y) as the relative density ratio). We note that rα,β,γ(x, y) is an increasing function w.r.t. r(x, y) and is nicely bounded even when r(x, y) is large. 
We will now show that the bounded rα,β,γ suggests the empirical estimation of JRPC has boundeness and low variance. In particular, let {xi, yi}ni=1 be n samples drawn uniformly at random from PXY and {x′j , y′j}mj=1 be m samples drawn uniformly at random from PXPY . Then, we use neural networks to empirically estimate JRPC as Ĵ m,n RPC: Definition 1 (Ĵm,nRPC, empirical estimation of JRPC) We parametrize f via a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where d ∈ N and Θ is compact. Then, Ĵm,nRPC = supfθ∈FΘ 1 n ∑n i=1 fθ(xi, yi)− 1 m ∑m j=1 αfθ(x ′ j , y ′ j)− 1n ∑n i=1 β 2 f 2 θ (xi, yi)− 1m ∑m j=1 γ 2 f 2 θ (x ′ j , y ′ j). Proposition 1 (Boundedness of Ĵm,nRPC, informal) 0 ≤ JRPC ≤ 1 2β + α2 2γ . Then, with probability at least 1− δ, |JRPC − Ĵm,nRPC| = O( √ d+log (1/δ) n′ ), where n ′ = min {n,m}. Proposition 2 (Variance of Ĵm,nRPC, informal) There exist universal constants c1 and c2 that depend only on α, β, γ, such that Var[Ĵm,nRPC] = O ( c1 n + c2 m ) . From the two propositions, whenm and n are large, i.e., the sample sizes are large, Ĵm,nRPC is bounded, and its variance vanishes to 0. First, the boundedness of Ĵm,nRPC suggests Ĵ m,n RPC will not grow to extremely large or small values. Prior contrastive learning objectives with good training stability (e.g., JCPC/JJS/JWPC) also have the boundedness of their objective values. For instance, the empirical estimation of JCPC is less than logN (equation 1) (Poole et al., 2019). Nevertheless, JCPC often performs the best only when minibatch size is large, and empirical performances of JJS and JWPC are not as competitive as JCPC. Second, the upper bound of the variance implies the training of Ĵm,nRPC can be stable, and in practice we observe a much smaller value than the stated upper bound. On the contrary, Song & Ermon (2019) shows that the empirical estimations of JDV and JNWJ exhibit inevitable variances that grow exponentially with the true DKL(PXY ‖PXPY ). Lastly, similar to prior contrastive learning objective that are related to distribution divergence measurement, we associate JRPC with the Chi-square divergence Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)] − 1 (Nielsen & Nock, 2013). The derivations are provided in Appendix. By having P ′ = ββ+γPXY + γ β+γPXPY as the mixture distribution of PXY and PXPY , we can rewrite JRPC(X,Y ) as JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)]. Hence, JRPC can be regarded as a generalization of Dχ2 with the relative parameters α, β, γ, where Dχ2 can be recovered from JRPC by specializing α = 0, β = 0 and γ = 1 (e.g., Dχ2 = 2JRPC|α=β=0,γ=1 − 1). Note that JRPC may not be a formal divergence measure with arbitrary α, β, γ. 3 EXPERIMENTS We provide an overview of the experimental section. First, we conduct benchmark self-supervised representation learning tasks spanning visual object classification and speech recognition. This set of experiments are designed to discuss the three challenges of the contrastive representation learning objectives: downstream task performance (Section 3.1), training stability (Section 3.2), and minibatch size sensitivity (Section 3.3). We also provide an ablation study on the choices of the relative parameters in JRPC (Section 3.4). On these experiments we found that JRPC achieves a lower variance during training, a lower batch size insensitivity, and consistent performance improvement. Second, we relate JRPC with mutual information (MI) estimation (Section 3.5). 
The connection is that MI is an average statistic of the density ratio, and we have shown that the optimal solution of JRPC is the relative density ratio (see Lemma 1). Thus we could estimate MI using the density ratio transformed from the optimal solution of JRPC. On these two sets of experiments, we fairly compare JRPC with other contrastive learning objectives. Particularly, across different objectives, we fix the network, learning rate, optimizer, and batch size (we use the default configurations suggested by the original implementations from Chen et al. (2020c), Rivière et al. (2020) and Tsai et al. (2020b).) The only difference will be the objective itself. In what follows, we perform the first set of experiments. We defer experimental details in the Appendix. Datasets. For the visual objective classification, we consider CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015). CIFAR-10/-100 and ImageNet contain labeled images only, while STL-10 contains labeled and unlabeled images. For the speech recognition, we consider LibriSpeech-100h (Panayotov et al., 2015) dataset, which contains 100 hours of 16kHz English speech from 251 speakers with 41 types of phonemes. Training and Evaluation Details. For the vision experiments, we follow the setup from SimCLRv2 (Chen et al., 2020c), which considers visual object recognition as its downstream task. For the speech experiments, we follow the setup from prior work (Oord et al., 2018; Rivière et al., 2020), which consider phoneme classification and speaker identification as the downstream tasks. Then, we briefly discuss the training and evaluation details into three modules: 1) related and unrelated data construction, 2) pre-training, and 3) fine-tuning and evaluation. For more details, please refer to Appendix or the original implementations. . Related and Unrelated Data Construction. In the vision experiment, we construct the related images by applying different augmentations on the same image. Hence, when (x, y) ∼ PXY , x and y are the same image with different augmentations. The unrelated images are two randomly selected samples. In the speech experiment, we define the current latent feature (feature at time t) and the future samples (samples at time > t) as related data. In other words, the feature in the latent space should contain information that can be used to infer future time steps. A latent feature and randomly selected samples would be considered as unrelated data. . Pre-training. The pre-training stage refers to the self-supervised training by a contrastive learning objective. Our training objective is defined in Definition 1, where we use neural networks to parametrize the function using the constructed related and unrelated data. Convolutional neural networks are used for vision experiments. Transformers (Vaswani et al., 2017) and LSTMs (Hochreiter & Schmidhuber, 1997) are used for speech experiments. . Fine-tuning and Evaluation. After the pre-training stage, we fix the parameters in the pre-trained networks and add a small fine-tuning network on top of them. Then, we fine-tune this small network with the downstream labels in the data’s training split. For the fine-tuning network, both vision and speech experiments consider multi-layer perceptrons. Last, we evaluate the fine-tuned representations on the data’s test split. We would like to point out that we do not normalize the hidden representations encoded by the pre-training neural network for loss calculation. 
This hidden normalization technique is widely applied (Tian et al., 2019; Chen et al., 2020b;c) to stabilize training and improve performance for prior objectives, but we find it unnecessary for J_RPC.

3.1 DOWNSTREAM TASK PERFORMANCES ON VISION AND SPEECH

For downstream task performance in the vision domain, we test the proposed J_RPC and other contrastive learning objectives on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet ILSVRC-2012 (Russakovsky et al., 2015). Here we report the best performance J_RPC attains on each dataset (experimental details are included in Appendix A.7). Table 2 shows that the proposed J_RPC outperforms the other objectives on all datasets. Using J_RPC on the largest network (a ResNet with depth 152, 2× channel width, and selective kernels), the performance jumps from 77.80% with J_CPC to 78.40% with J_RPC. Regarding speech representation learning, the downstream performance for phoneme and speaker classification is shown in Table 3 (experimental details are deferred to Appendix A.9). Compared to J_CPC, J_RPC improves the phoneme classification results by 4.8 percentage points and the speaker classification results by 0.3 percentage points, which is closer to the fully supervised model. Overall, the proposed J_RPC performs better than the other unsupervised learning objectives on both the phoneme classification and speaker classification tasks.

3.2 TRAINING STABILITY

We provide an empirical comparison of the training stability of J_DV, J_NWJ, J_CPC, and J_RPC by plotting the values of the objectives as training progresses. We apply the four objectives to the SimCLRv2 framework and train on the CIFAR-10 dataset. All training setups are exactly the same except for the objectives. In our experiments, J_DV and J_NWJ soon explode to NaN and disrupt training (shown as early stopping in Figure 1a; extremely large values are not plotted due to scale constraints). On the other hand, J_RPC and J_CPC have low variance, and both enjoy stable training. As a result, representations learned with the unstable J_DV and J_NWJ suffer in the downstream tasks, while representations learned with J_RPC and J_CPC work much better.

3.3 MINIBATCH SIZE SENSITIVITY

We then analyze the effect of minibatch size on J_RPC and J_CPC, since J_CPC is known to be sensitive to minibatch size (Poole et al., 2019). We train SimCLRv2 (Chen et al., 2020c) on CIFAR-10 and the model from Rivière et al. (2020) on LibriSpeech-100h using J_RPC and J_CPC with different minibatch sizes. The settings of the relative parameters are the same as in Section 3.2. From Figures 1b and 1c, we observe that both J_RPC and J_CPC achieve their best performance at a large minibatch size. However, when the minibatch size decreases, the performance of J_CPC shows higher sensitivity and suffers more when the number of minibatch samples is small. The result suggests that the proposed method might be less sensitive to changes in minibatch size than J_CPC given the same training settings.

3.4 EFFECT OF RELATIVE PARAMETERS

We study the effect of different combinations of the relative parameters in J_RPC by comparing downstream performance on visual object recognition. We train SimCLRv2 on CIFAR-10 with different combinations of α, β and γ in J_RPC and fix all other experimental settings. We choose α ∈ {0, 0.001, 1.0}, β ∈ {0, 0.001, 1.0}, γ ∈ {0, 0.001, 1.0}, and we report the best performance under each combination of α, β, and γ.
From Figure 2, we first observe that α > 0 yields better downstream performance than α = 0 when β and γ are fixed. This observation is expected, since α > 0 encourages the representations of related and unrelated samples to be pushed apart. Then, we find that a small but nonzero β (β = 0.001) and a large γ (γ = 1.0) give the best performance compared to other combinations. Since β and γ serve as the coefficients of the ℓ2 regularization, the results imply that the regularization is a strong and sensitive factor influencing performance. The results here are not as competitive as those in Table 2 because the CIFAR-10 result reported in Table 2 uses a set of relative parameters (α = 1.0, β = 0.005, γ = 1.0) that differs from the combinations in this subsection. Also, we use quite different ranges of γ on ImageNet (see A.7 for details). In conclusion, we find empirically that a non-zero α, a small β, and a large γ lead to the best representation for the downstream task on CIFAR-10.

3.5 RELATION TO MUTUAL INFORMATION ESTIMATION

The presented approach also closely relates to mutual information estimation. For random variables X and Y with joint distribution P_XY and product of marginals P_X P_Y, the mutual information is defined as I(X;Y) = D_KL(P_XY ‖ P_X P_Y). Lemma 1 states that, given the optimal solution f*(x, y) of J_RPC, we can recover the density ratio r(x, y) := p(x, y)/(p(x)p(y)) as

r(x, y) = (γ/β + α)/(1 − β f*(x, y)) − γ/β = (γ f*(x, y) + α)/(1 − β f*(x, y)).

We can empirically estimate r̂(x, y) from the estimated f̂(x, y) via this transformation, and use r̂(x, y) to estimate the mutual information (Tsai et al., 2020b). Specifically, I(X;Y) ≈ (1/n) Σ_{i=1}^n log r̂(x_i, y_i) with (x_i, y_i) ∼ P^{⊗n}_{X,Y}, where P^{⊗n}_{X,Y} is the uniformly sampled empirical distribution of P_{X,Y}.

We follow prior work (Poole et al., 2019; Song & Ermon, 2019; Tsai et al., 2020b) for the experiments. We consider X and Y as two 20-dimensional Gaussians with correlation ρ, and our goal is to estimate the mutual information I(X;Y). Then, we perform a cubic transformation on y so that y ↦ y³. The first task is referred to as the Gaussian task and the second as the Cubic task; both have ground truth I(X;Y) = −10 log(1 − ρ²). The models are trained for 20,000 steps, with I(X;Y) starting at 2 and increased by 2 every 4,000 steps. Our method is compared with the baselines J_CPC (Oord et al., 2018), J_NWJ (Nguyen et al., 2010), J_JS (Nowozin et al., 2016), SMILE (Song & Ermon, 2019), and Difference of Entropies (DoE) (McAllester & Stratos, 2020). All approaches use the same network design, learning rate, optimizer, and minibatch size for a fair comparison. First, we observe that J_CPC (Oord et al., 2018) has the smallest variance, but it exhibits a large bias (the mutual information estimated by J_CPC is upper-bounded by log(batch size)). Second, J_NWJ (Nguyen et al., 2010) and J_JS (Poole et al., 2019) have large variances, especially on the Cubic task. Song & Ermon (2019) pointed out these limitations of J_CPC, J_NWJ, and J_JS, and developed the SMILE method, which clips the value of the estimated density function to reduce the variance of the estimators. DoE (McAllester & Stratos, 2020) is neither a lower bound nor an upper bound on the mutual information, but can achieve accurate estimates when the underlying mutual information is large. J_RPC exhibits comparable bias and lower variance compared to the SMILE method, and is more stable than the DoE method. We would like to highlight our method's low-variance property: we neither clip the values of the estimated density ratio nor impose an upper bound on the estimated mutual information.
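As a concrete illustration of the estimator described in this subsection, the following NumPy sketch (not the authors' code) inverts the optimal-critic relation to recover the density ratio and averages its logarithm over joint samples; the relative-parameter values and the small clipping constant are illustrative assumptions.

```python
import numpy as np

def ratio_from_critic(f_hat, alpha=1.0, beta=0.005, gamma=1.0):
    # Invert f* = (r - alpha) / (beta r + gamma):  r = (gamma f* + alpha) / (1 - beta f*),
    # which matches the (gamma/beta + alpha)/(1 - beta f*) - gamma/beta form in the text.
    return (gamma * f_hat + alpha) / (1.0 - beta * f_hat)

def mi_estimate(f_hat_on_joint, alpha=1.0, beta=0.005, gamma=1.0, eps=1e-8):
    # I(X;Y) is approximated by the average of log r_hat over samples from P_XY.
    r_hat = np.clip(ratio_from_critic(np.asarray(f_hat_on_joint), alpha, beta, gamma), eps, None)
    return np.log(r_hat).mean()

# Sanity check: feeding the optimal critic value for known ratios recovers those ratios.
true_r = np.array([0.5, 1.0, 2.0, 10.0])
f_star = (true_r - 1.0) / (0.005 * true_r + 1.0)   # alpha=1.0, beta=0.005, gamma=1.0
print(ratio_from_critic(f_star))                    # approximately [0.5, 1.0, 2.0, 10.0]
```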
4 RELATED WORK

As a subset of unsupervised representation learning, self-supervised representation learning (SSL) adopts self-defined signals as supervision and uses the learned representation for downstream tasks, such as object detection and image captioning (Liu et al., 2020). We categorize SSL work into two groups, according to whether the supervisory signal is a hidden property of the input or a corresponding view of the input. For the first group, for example, the Jigsaw puzzle task (Noroozi & Favaro, 2016) shuffles image patches and defines the SSL task as predicting the shuffled positions of the patches. Other instances are Predicting Rotations (Gidaris et al., 2018) and Shuffle & Learn (Misra et al., 2016). For the second group, the SSL task aims at modeling the co-occurrence of multiple views of the data, via contrastive or predictive learning objectives (Tsai et al., 2020a). Predictive objectives encourage reconstruction from one view of the data to another, such as predicting the lower part of an image from its upper part (ImageGPT by Chen et al. (2020a)). Comparing contrastive with predictive learning approaches, Tsai et al. (2020a) point out that the former requires fewer computational resources for good performance but suffers more from over-fitting. Theoretical analysis (Arora et al., 2019; Tsai et al., 2020a; Tosh et al., 2020) suggests that contrastively learned representations can lead to good downstream performance. Beyond the theory, Tian et al. (2020) show that what matters more for performance are 1) the choice of the contrastive learning objective and 2) the creation of the positive and negative data pairs in the contrastive objective. Recent work (Khosla et al., 2020) extends the usage of contrastive learning from the self-supervised setting to the supervised setting. The supervised setting defines the positive pairs as data from the same class in the contrastive objective, while the self-supervised setting defines the positive pairs as data with different augmentations.

Our work also closely relates to skewed divergence measures between distributions (Lee, 1999; 2001; Nielsen, 2010; Yamada et al., 2013). Recall that the relative parameters play a crucial role in regularizing our objective so that it is bounded and has low variance. This idea is similar to skewed divergence measurement: when calculating the divergence between distributions P and Q, instead of considering D(P ‖ Q), these approaches consider D(P ‖ αP + (1 − α)Q), with D a divergence and 0 < α < 1. A natural example is that the Jensen-Shannon divergence is a symmetric skewed KL divergence: D_JS(P ‖ Q) = 0.5 D_KL(P ‖ 0.5P + 0.5Q) + 0.5 D_KL(Q ‖ 0.5P + 0.5Q). Compared to their non-skewed counterparts, skewed divergences have been shown to yield more robust estimates of their values (Lee, 1999; 2001; Yamada et al., 2013). Different from these works, which focus on estimating the values of divergences between distributions, we focus on learning self-supervised representations.

5 CONCLUSION

In this work, we present RPC, Relative Predictive Coding, which achieves a good balance among the three challenges in modeling a contrastive learning objective: training stability, sensitivity to minibatch size, and downstream task performance.
We believe this work brings an appealing option for training self-supervised models and will inspire future work on designing objectives that balance the aforementioned three challenges. In the future, we are interested in applying RPC to other application domains and in developing more principled approaches for better representation learning.

ACKNOWLEDGEMENT

This work was supported in part by NSF IIS1763562, NSF Awards #1750439 and #1722822, the National Institutes of Health, IARPA D17PC00340, ONR Grant N000141812861, and a Facebook PhD Fellowship. We would also like to acknowledge NVIDIA's GPU support and Cloud TPU support from Google's TensorFlow Research Cloud (TFRC).

A APPENDIX

A.1 PROOF OF LEMMA 1 IN THE MAIN TEXT

Lemma 2 (Optimal solution of J_RPC, restating Lemma 1 in the main text) Let

J_RPC(X, Y) := sup_{f∈F} E_{P_XY}[f(x, y)] − α E_{P_X P_Y}[f(x, y)] − (β/2) E_{P_XY}[f²(x, y)] − (γ/2) E_{P_X P_Y}[f²(x, y)]

and let r(x, y) = p(x, y)/(p(x)p(y)) be the density ratio. J_RPC has the optimal solution

f*(x, y) = (r(x, y) − α)/(β r(x, y) + γ) =: r_{α,β,γ}(x, y),   with −α/γ ≤ r_{α,β,γ} ≤ 1/β.

Proof: The second-order functional derivative of the objective is −β dP_XY − γ dP_X P_Y, which is always negative. The negative second-order functional derivative implies that the objective attains a supremum. Setting the first-order functional derivative to zero gives

dP_XY − α dP_X P_Y − β f(x, y) dP_XY − γ f(x, y) dP_X P_Y = 0.

We then get

f*(x, y) = (dP_XY − α dP_X P_Y)/(β dP_XY + γ dP_X P_Y) = (p(x, y) − α p(x)p(y))/(β p(x, y) + γ p(x)p(y)) = (r(x, y) − α)/(β r(x, y) + γ).

Since 0 ≤ r(x, y) ≤ ∞, we have −α/γ ≤ (r(x, y) − α)/(β r(x, y) + γ) ≤ 1/β. Hence, for all β ≠ 0 and γ ≠ 0, f*(x, y) := r_{α,β,γ}(x, y) with −α/γ ≤ r_{α,β,γ} ≤ 1/β.

A.2 RELATION BETWEEN J_RPC AND D_χ²

In this subsection, we show the following: 1) D_χ²(P_XY ‖ P_X P_Y) = E_{P_X P_Y}[r²(x, y)] − 1; and 2) J_RPC(X, Y) = ((β+γ)/2) E_{P′}[r²_{α,β,γ}(x, y)], where P′ = (β/(β+γ)) P_XY + (γ/(β+γ)) P_X P_Y is the mixture distribution of P_XY and P_X P_Y.

Lemma 3 D_χ²(P_XY ‖ P_X P_Y) = E_{P_X P_Y}[r²(x, y)] − 1.

Proof: By definition (Nielsen & Nock, 2013),

D_χ²(P_XY ‖ P_X P_Y) = ∫ (dP_XY)²/dP_X P_Y − 1 = ∫ (dP_XY/dP_X P_Y)² dP_X P_Y − 1 = ∫ (p(x, y)/(p(x)p(y)))² dP_X P_Y − 1 = ∫ r²(x, y) dP_X P_Y − 1 = E_{P_X P_Y}[r²(x, y)] − 1.

Lemma 4 Defining P′ = (β/(β+γ)) P_XY + (γ/(β+γ)) P_X P_Y as a mixture distribution of P_XY and P_X P_Y, J_RPC(X, Y) = ((β+γ)/2) E_{P′}[r²_{α,β,γ}(x, y)].

Proof: Plugging the optimal solution f*(x, y) = (dP_XY − α dP_X P_Y)/(β dP_XY + γ dP_X P_Y) (see Lemma 2) into J_RPC:

J_RPC = E_{P_XY}[f*(x, y)] − α E_{P_X P_Y}[f*(x, y)] − (β/2) E_{P_XY}[f*²(x, y)] − (γ/2) E_{P_X P_Y}[f*²(x, y)]
     = ∫ f*(x, y) (dP_XY − α dP_X P_Y) − (1/2) f*²(x, y) (β dP_XY + γ dP_X P_Y)
     = ∫ (dP_XY − α dP_X P_Y)²/(β dP_XY + γ dP_X P_Y) − (1/2) ∫ (dP_XY − α dP_X P_Y)²/(β dP_XY + γ dP_X P_Y)
     = (1/2) ∫ (dP_XY − α dP_X P_Y)²/(β dP_XY + γ dP_X P_Y)
     = ((β+γ)/2) ∫ ((dP_XY − α dP_X P_Y)/(β dP_XY + γ dP_X P_Y))² ((β/(β+γ)) dP_XY + (γ/(β+γ)) dP_X P_Y).

Since we define r_{α,β,γ} = (dP_XY − α dP_X P_Y)/(β dP_XY + γ dP_X P_Y) and P′ = (β/(β+γ)) P_XY + (γ/(β+γ)) P_X P_Y,

J_RPC = ((β+γ)/2) E_{P′}[r²_{α,β,γ}(x, y)].

A.3 PROOF OF PROPOSITION 1 IN THE MAIN TEXT

The proof contains two parts: showing 0 ≤ J_RPC ≤ 1/(2β) + α²/(2γ) (Section A.3.1) and showing that Ĵ^{m,n}_RPC is a consistent estimator of J_RPC (Section A.3.2).

A.3.1 BOUNDEDNESS OF J_RPC

Lemma 5 (Boundedness of J_RPC) 0 ≤ J_RPC ≤ 1/(2β) + α²/(2γ).

Proof: Lemma 4 states that J_RPC(X, Y) = ((β+γ)/2) E_{P′}[r²_{α,β,γ}(x, y)], with P′ = (β/(β+γ)) P_XY + (γ/(β+γ)) P_X P_Y the mixture distribution of P_XY and P_X P_Y. Hence, it is obvious that J_RPC(X, Y) ≥ 0.
We leverage the intermediate results in the proof of Lemma 4: JRPC(X,Y ) = 1 2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = 1 2 ∫ dPX,Y ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) − α 2 ∫ dPXPY ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) = 1 2 EPXY [rα,β,γ(x, y)]− α 2 EPXPY [rα,β,γ(x, y)]. Since −αγ ≤ rα,β,γ ≤ 1 β , JRPC(X,Y ) ≤ 1 2β + α2 2γ . A.3.2 CONSISTENCY We first recall the definition of the estimation of JRPC: Definition 2 (Ĵm,nRPC, empirical estimation of JRPC, restating Definition 1 in the main text) We parametrize f via a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where d ∈ N and Θ is compact. Let {xi, yi}ni=1 be n samples drawn uniformly at random from PXY and {x′j , y′j}mj=1 be m samples drawn uniformly at random from PXPY . Then, Ĵm,nRPC = sup fθ∈FΘ 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j). Our goal is to show that Ĵm,nRPC is a consistent estimator for JRPC. We begin with the following definition: Ĵm,nRPC,θ := 1 n n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j) (3) and E [ ĴRPC,θ ] := EPXY [fθ(x, y)]−αEPXPY [fθ(x, y)]− β 2 EPXY [f2θ (x, y)]− γ 2 EPXPY [f2θ (x, y)]. (4) Then, we follow the steps: • The first part is about estimation. We show that, with high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ. • The second part is about approximation. We will apply the universal approximation lemma of neural networks (Hornik et al., 1989) to show that there exists a network θ∗ such that E [ ĴRPC,θ∗ ] is close to JRPC. Part I - Estimation: With high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ. Throughout the analysis on the uniform convergence, we need the assumptions on the boundness and smoothness of the function fθ. Since we show the optimal function f is bounded in JRPC, we can use the same bounded values for fθ without losing too much precision. The smoothness of the function suggests that the output of the network should only change slightly when only slightly perturbing the parameters. Specifically, the two assumptions are as follows: Assumption 1 (boundness of fθ) There exist universal constants such that ∀fθ ∈ FΘ, CL ≤ fθ ≤ CU . For notations simplicity, we let M = CU − CL be the range of fθ and U = max {|CU |, |CL|} be the maximal absolute value of fθ. In the paper, we can choose to constrain that CL = −αγ and CU = 1 β since the optimal function f ∗ has −αγ ≤ f ∗ ≤ 1β . Assumption 2 (smoothness of fθ) There exists constant ρ > 0 such that ∀(x, y) ∈ (X × Y) and θ1, θ2 ∈ Θ, |fθ1(x, y)− fθ2(x, y)| ≤ ρ|θ1 − θ2|. Now, we can bound the rate of uniform convergence of a function class in terms of covering number (Bartlett, 1998): Lemma 6 (Estimation) Let > 0 and N (Θ, ) be the covering number of Θ with radius . Then, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) . Proof: For notation simplicity, we define the operators • P (f) = EPXY [f(x, y)] and Pn(f) = 1n ∑n i=1 f(xi, yi) • Q(f) = EPXPY [f(x, y)] and Qm(f) = 1m ∑m j=1 f(x ′ j , y ′ j) Hence,∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ = ∣∣Pn(fθ)− P (fθ)− αQm(fθ) + αQ(fθ)− βPn(f2θ ) + βP (f2θ )− γQm(f2θ ) + γQ(f2θ )∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β ∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣ Let ′ = 4ρ ( 1+α+2(β+γ)U ) and T := N (Θ, ′). 
LetC = {fθ1 , fθ2 , · · · , fθT }with {θ1, θ2, · · · , θT } be such that B∞(θ1, ′), · · · , B∞(θT , ′) are ′ cover. Hence, for any fθ ∈ FΘ, there is an fθk ∈ C such that ‖θ − θk‖∞ ≤ ′. Then, for any fθk ∈ C:∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β ∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣ ≤ |Pn(fθk)− P (fθk)|+ |Pn(fθ)− Pn(fθk)|+ |P (fθ)− P (fθk)| + α ( |Qm(fθk)−Q(fθk)|+ |Qm(fθ)−Qm(fθk)|+ |Q(fθ)−Q(fθk)| ) + β ( ∣∣Pn(f2θk)− P (f2θk)∣∣+ ∣∣Pn(f2θ )− Pn(f2θk)∣∣+ ∣∣P (f2θ )− P (f2θk)∣∣ ) + γ ( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ ∣∣Qm(f2θ )−Qm(f2θk)∣∣+ ∣∣Q(f2θ )−Q(f2θk)∣∣ ) ≤ |Pn(fθk)− P (fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖ + α ( |Qm(fθk)−Q(fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖ ) + β ( ∣∣Pn(f2θk)− P (f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) + γ ( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) = |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ + 2ρ ( 1 + α+ 2(β + γ)U ) ‖θ − θk‖ ≤ |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 , where • |Pn(fθ)− Pn(fθk)| ≤ ρ‖θ − θk‖ due to Assumption 2, and the result also applies for |P (fθ)− P (fθk)|, |Qm(fθ)−Qm(fθk)|, and |Q(fθ)−Q(fθk)|. • ∣∣Pn(f2θ )− Pn(f2θk)∣∣ ≤ 2‖fθ‖∞ρ‖θ−θk‖ ≤ 2ρU‖θ−θk‖ due to Assumptions 1 and 2. The result also applies for ∣∣P (f2θ )− P (f2θk)∣∣, ∣∣Qm(f2θ )−Qm(f2θk)∣∣, and ∣∣Q(f2θ )−Q(f2θk)∣∣. Hence, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤Pr ( max fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 ≥ ) = Pr ( max fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2 ) ≤ T∑ k=1 Pr ( |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2) ≤ T∑ k=1 Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) + Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) + Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8)+ Pr(γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) . With Hoeffding’s inequality, • Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) ≤ 2exp ( − n 2 32M2 ) • Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) ≤ 2exp ( − m 2 32M2α2 ) • Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8) ≤ 2exp(− n 232U2β2) • Pr ( γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) ≤ 2exp(− m 232U2γ2) To conclude, Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ ) ≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) . Part II - Approximation: Neural Network Universal Approximation. We leverage the universal function approximation lemma of neural network Lemma 7 (Approximation (Hornik et al., 1989)) Let > 0. There exists d ∈ N and a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where Θ is compact, such that inf fθ∈FΘ ∣∣∣E[ĴRPC,θ]− JRPC∣∣∣ ≤ . Part III - Bringing everything together. Now, we are ready to bring the estimation and approximation together to show that there exists a neural network θ∗ such that, with high probability, Ĵm,nRPC,θ can approximate JRPC with n′ = min {n,m} at a rate of O(1/ √ n′): Proposition 3 With probability at least 1 − δ, ∃θ∗ ∈ Θ, |JRPC − Ĵm,nRPC,θ| = O( √ d+log (1/δ) n′ ), where n′ = min {n,m}. Proof: The proof follows by combining Lemma 6 and 7. First, Lemma 7 suggests, ∃θ∗ ∈ Θ,∣∣∣E[ĴRPC,θ∗]− JRPC∣∣∣ ≤ 2 . Next, we perform analysis on the estimation error, aiming to find n,m and the corresponding probability, such that ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ∗]∣∣∣ ≤ 2 . 
Applying Lemma 6 with the covering number of the neural network: ( N (Θ, ) = O ( exp ( d log (1/ ) )) (Anthony & Bartlett, 2009) ) and let n′ = min{n,m}: Pr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ 2 ) ≤2N (Θ, 8ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 128M2 ) + exp ( − m 2 128M2α2 ) + exp ( − n 2 128U2β2 ) + exp ( − m 2 128U2γ2 )) =O ( exp ( d log (1/ )− n′ 2 )) , where the big-O notation absorbs all the constants that do not require in the following derivation. Since we want to bound the probability with 1− δ, we solve the such that exp ( d log (1/ )− n′ 2 ) ≤ δ. With log (x) ≤ x− 1, n′ 2 + d( − 1) ≥ n′ 2 + dlog ≥ log (1/δ), where this inequality holds when = O (√ d+ log (1/δ) n′ ) . A.4 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM AN ASYMPTOTIC VIEWPOINT Here, we provide the variance analysis on Ĵm,nRPC via an asymptotic viewpoint. First, assuming the network is correctly specified, and hence there exists a network parameter θ∗ satisfying f∗(x, y) = fθ∗(x, y) = rα,β,γ(x, y). Then we recall that Ĵ m,n RPC is a consistent estimator of J RPC (see Proposition 3), and under regular conditions, the estimated network parameter θ̂ in Ĵm,nRPC satisfying the asymptotic normality in the large sample limit (see Theorem 5.23 in (Van der Vaart, 2000)). We recall the definition of Ĵm,nRPC,θ in equation 3 and let n ′ = min{n,m}, the asymptotic expansion of Ĵm,nRPC has Ĵm,nRPC,θ∗ = Ĵ m,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + o(‖θ∗ − θ̂‖) = Ĵm,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + op( 1√ n′ ) = Ĵm,n RPC,θ̂ + op( 1√ n′ ), (5) where ˙̂Jm,n RPC,θ̂ = 0 since θ̂ is the estimation from Ĵm,nRPC = sup fθ∈FΘ Ĵm,nRPC,θ. Next, we recall the definition in equation 4: E[ĴRPC,θ̂] = EPXY [fθ̂(x, y)]− αEPXPY [fθ̂(x, y)]− β 2 EPXY [f2θ̂ (x, y)]− γ 2 EPXPY [f2θ̂ (x, y)]. Likewise, the asymptotic expansion of E[ĴRPC,θ] has E[ĴRPC,θ̂] = E[ĴRPC,θ∗ ] + E[ ˙̂ JRPC,θ∗ ](θ̂ − θ∗) + o(‖θ̂ − θ∗‖) = E[ĴRPC,θ∗ ] + E[ ˙̂JRPC,θ∗ ](θ̂ − θ∗) + op( 1√ n′ ) = E[ĴRPC,θ∗ ] + op( 1√ n′ ), (6) where E[ ˙̂JRPC,θ∗ ] = 0 since E[ĴRPC,θ∗ ] = JRPC and θ∗ satisfying f∗(x, y) = fθ∗(x, y). Combining equations 5 and 6: Ĵm,n RPC,θ̂ − E[ĴRPC,θ̂] =Ĵ m,n RPC,θ∗ − JRPC + op( 1√ n′ ) = 1 n n∑ i=1 f∗θ (xi, yi)− α 1 m m∑ j=1 f∗θ (x ′ j , y ′ j)− β 2 1 n n∑ i=1 f2θ∗(xi, yi)− γ 2 1 m m∑ j=1 f2θ∗(x ′ j , y ′ j) − EPXY [f∗(x, y)] + αEPXPY [f∗(x, y)] + β 2 EPXY [ f∗2(x, y) ] + γ 2 EPXPY [ f∗2(x, y) ] + op( 1√ n′ ) = 1 n n∑ i=1 rα,β,γ(xi, yi)− α 1 m m∑ j=1 rα,β,γ(x ′ j , y ′ j)− β 2 1 n n∑ i=1 r2α,β,γ(xi, yi)− γ 2 1 m m∑ j=1 r2α,β,γ(x ′ j , y ′ j) − EPXY [rα,β,γ(x, y)] + αEPXPY [rα,β,γ(x, y)] + β 2 EPXY [ r2α,β,γ(x, y) ] + γ 2 EPXPY [ r2α,β,γ(x, y) ] + op( 1√ n′ ) = 1√ n · 1√ n n∑ i=1 ( rα,β,γ(xi, yi)− β 2 r2α,β,γ(xi, yi)− EPXY [ rα,β,γ(x, y)− β 2 r2α,β,γ(x, y) ]) − 1√ m · 1√ m m∑ j=1 ( αrα,β,γ(x ′ j , y ′ j) + γ 2 r2α,β,γ(x ′ j , y ′ j)− EPXPY [ αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y) ]) + op( 1√ n′ ). Therefore, the asymptotic Variance of Ĵm,nRPC is Var[Ĵm,nRPC] = 1 n VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] + 1 m VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] + o( 1 n′ ). First, we look at VarPXY [rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y)]. Since β > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − 2αγ+βα 2 2γ2 ≤ rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y) ≤ 12β . Hence, VarPXY [rα,β,γ(x, y)− β 2 r2α,β,γ(x, y)] ≤ max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} . Next, we look at VarPXPY [αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y)]. Since α ≥ 0, γ > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − α2 2γ ≤ αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y) ≤ 2αβ+γ 2β2 . 
Hence,

Var_{P_X P_Y}[α r_{α,β,γ}(x, y) + (γ/2) r²_{α,β,γ}(x, y)] ≤ max{ (α²/(2γ))², ((2αβ + γ)/(2β²))² }.

Combining everything together, we restate Proposition 2 in the main text:

Proposition 4 (Asymptotic variance of Ĵ^{m,n}_RPC)

Var[Ĵ^{m,n}_RPC] = (1/n) Var_{P_XY}[r_{α,β,γ}(x, y) − (β/2) r²_{α,β,γ}(x, y)] + (1/m) Var_{P_X P_Y}[α r_{α,β,γ}(x, y) + (γ/2) r²_{α,β,γ}(x, y)] + o(1/n′)
≤ (1/n) max{ ((2αγ + βα²)/(2γ²))², (1/(2β))² } + (1/m) max{ (α²/(2γ))², ((2αβ + γ)/(2β²))² } + o(1/n′).

A.5 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM BOUNDEDNESS OF f_θ

As discussed in Assumption 1, for the estimation of Ĵ^{m,n}_RPC we can bound the function f_θ in F_Θ within [−α/γ, 1/β] without losing precision. Re-arranging Ĵ^{m,n}_RPC,

sup_{f_θ∈F_Θ} (1/n) Σ_{i=1}^n f_θ(x_i, y_i) − (1/m) Σ_{j=1}^m α f_θ(x'_j, y'_j) − (1/n) Σ_{i=1}^n (β/2) f²_θ(x_i, y_i) − (1/m) Σ_{j=1}^m (γ/2) f²_θ(x'_j, y'_j)
= sup_{f_θ∈F_Θ} (1/n) Σ_{i=1}^n ( f_θ(x_i, y_i) − (β/2) f²_θ(x_i, y_i) ) − (1/m) Σ_{j=1}^m ( α f_θ(x'_j, y'_j) + (γ/2) f²_θ(x'_j, y'_j) ).

Since −α/γ ≤ f_θ(·, ·) ≤ 1/β, basic calculations give us

−(2αγ + βα²)/(2γ²) ≤ f_θ(x_i, y_i) − (β/2) f²_θ(x_i, y_i) ≤ 1/(2β)

and

−α²/(2γ) ≤ α f_θ(x'_j, y'_j) + (γ/2) f²_θ(x'_j, y'_j) ≤ (2αβ + γ)/(2β²).

The resulting variances satisfy

Var[f_θ(x_i, y_i) − (β/2) f²_θ(x_i, y_i)] ≤ max{ ((2αγ + βα²)/(2γ²))², (1/(2β))² }

and

Var[α f_θ(x'_j, y'_j) + (γ/2) f²_θ(x'_j, y'_j)] ≤ max{ (α²/(2γ))², ((2αβ + γ)/(2β²))² }.

Taking the mean over the n and m independent random variables gives the result:

Proposition 5 (Variance of Ĵ^{m,n}_RPC)

Var[Ĵ^{m,n}_RPC] ≤ (1/n) max{ ((2αγ + βα²)/(2γ²))², (1/(2β))² } + (1/m) max{ (α²/(2γ))², ((2αβ + γ)/(2β²))² }.

A.6 IMPLEMENTATION OF EXPERIMENTS

For visual representation learning, we follow the implementation in https://github.com/google-research/simclr. For speech representation learning, we follow the implementation in https://github.com/facebookresearch/CPC_audio. For MI estimation, we follow the implementation in https://github.com/yaohungt/Pointwise_Dependency_Neural_Estimation/tree/master/MI_Est_and_CrossModal.

A.7 RELATIVE PREDICTIVE CODING ON VISION

The whole pre-training pipeline contains the following steps. First, a stochastic data augmentation transforms one image sample x_k into two different but correlated augmented views, x'_{2k−1} and x'_{2k}. Then a base encoder f(·), implemented using a ResNet (He et al., 2016), extracts representations from the augmented views, creating representations h_{2k−1} and h_{2k}. A small neural network g(·), called the projection head, then maps h_{2k−1} and h_{2k} to z_{2k−1} and z_{2k} in a different latent space. For each minibatch of N samples, 2N views are generated. For each image x_k there is one positive pair, x'_{2k−1} and x'_{2k}, and 2(N − 1) negative samples. The RPC loss between a pair of positive views x'_i and x'_j (augmented from the same image) can be calculated by substituting f_θ(x'_i, x'_j) = (z_i · z_j)/τ = s_{i,j} (τ is a hyperparameter) into the definition of RPC:

ℓ^RPC_{i,j} = −( s_{i,j} − (α/(2(N−1))) Σ_{k=1}^{2N} 1[k≠i] s_{i,k} − (β/2) s²_{i,j} − (γ/(2·2(N−1))) Σ_{k=1}^{2N} 1[k≠i] s²_{i,k} )   (7)

(a code sketch of this batch loss is given after the CIFAR-10/-100 details below). For losses other than RPC, a hidden normalization of s_{i,j} is often required, replacing z_i · z_j with (z_i · z_j)/(|z_i||z_j|). CPC and WPC adopt this normalization, while the other objectives need it to help stabilize the training variance. RPC does not need this normalization.

A.8 CIFAR-10/-100 AND IMAGENET EXPERIMENT DETAILS

ImageNet. Following the settings in Chen et al. (2020b;c), we train the model on a Cloud TPU with 128 cores, with a batch size of 4,096 and global batch normalization (Ioffe & Szegedy, 2015). (For WPC (Ozair et al., 2019), global batch normalization during pretraining is disabled, since we enforce 1-Lipschitzness with a gradient penalty (Gulrajani et al., 2017).)
Here we refer to the term batch size as the number of images (or utterances in the speech experiments) used per GPU, while the term minibatch size refers to the number of negative samples used to calculate the objective, such as CPC or our proposed RPC. The largest model we train is a 152-layer ResNet with selective kernels (SK) (Li et al., 2019) and 2× wider channels. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate increases linearly for the first 20 epochs, reaching a maximum of 6.4, and is then decayed with a cosine schedule. The weight decay is 10^−4. An MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer, and we train the model for only 100 epochs rather than 800 epochs due to computational constraints. These two choices slightly reduce the CPC performance benchmark by about 2% under the exact same setting. The unsupervised pre-training is followed by supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use learning rates of 0.16 and 0.064 for the standard 50-layer ResNet and the larger 152-layer ResNet, respectively, and weight decay and learning rate warmup are removed. Different from Chen et al. (2020c), we use a batch size of 4,096, and we do not use global batch normalization for fine-tuning. For J_RPC we disable hidden normalization and use a temperature τ = 32. For all other objectives, we use hidden normalization and τ = 0.1, following previous work (Chen et al., 2020c). For the relative parameters, we use α = 0.3, β = 0.001, γ = 0.1 and α = 0.3, β = 0.001, γ = 0.005 for ResNet-50 and ResNet-152, respectively.

CIFAR-10/-100. Following the settings in Chen et al. (2020b), we train the model on a single GPU with a batch size of 512 and global batch normalization (Ioffe & Szegedy, 2015). We use ResNets (He et al., 2016) of depth 18 and depth 50, and do not use selective kernels (Li et al., 2019) or a width multiplier. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate increases linearly for the first 20 epochs, reaching a maximum of 6.4, and is then decayed with a cosine schedule. The weight decay is 10^−4. An MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer. We train the model for 1000 epochs. The unsupervised pre-training is followed by supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use a learning rate of 0.16 for the standard 50-layer ResNet, and weight decay and learning rate warmup are removed. For J_RPC we disable hidden normalization and use a temperature τ = 128. For all other objectives, we use hidden normalization and τ = 0.5, following previous work (Chen et al., 2020c). For the relative parameters, we use α = 1.0, β = 0.005, and γ = 1.0.
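To complement Eq. 7, the following PyTorch sketch (not the authors' released code) computes the batch RPC loss from a (2N, d) tensor of projections z ordered so that views 2k and 2k+1 come from the same image. It assumes the 2(N − 1) negatives for view i are all views other than i and its positive partner, and it reuses the CIFAR-10 values of α, β, γ, and τ quoted above; no hidden normalization is applied.

```python
import torch

def rpc_loss(z, alpha=1.0, beta=0.005, gamma=1.0, tau=128.0):
    two_n = z.shape[0]
    idx = torch.arange(two_n)
    s = z @ z.t() / tau                     # scores s_{i,k} = (z_i . z_k) / tau
    pos_idx = idx ^ 1                       # partner view (0<->1, 2<->3, ...)
    s_pos = s[idx, pos_idx]                 # s_{i,j} for each positive pair

    neg_mask = torch.ones_like(s, dtype=torch.bool)
    neg_mask[idx, idx] = False              # drop self-scores
    neg_mask[idx, pos_idx] = False          # drop the positive partner
    n_neg = two_n - 2                       # 2(N - 1) negatives per view

    s_neg = (s * neg_mask).sum(dim=1) / n_neg
    s_neg_sq = (s ** 2 * neg_mask).sum(dim=1) / n_neg

    # Eq. 7, averaged over the 2N views in the minibatch.
    return (-(s_pos - alpha * s_neg - 0.5 * beta * s_pos ** 2 - 0.5 * gamma * s_neg_sq)).mean()

z = torch.randn(8, 128, requires_grad=True)  # 2N = 8 views of N = 4 images
rpc_loss(z).backward()
```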
STL-10. We also perform pre-training and fine-tuning on STL-10 (Coates et al., 2011) using the model proposed by Chuang et al. (2020). Chuang et al. (2020) propose to indirectly approximate the distribution of negative samples so that the objective is debiased; however, their implementation of contrastive learning is consistent with Chen et al. (2020b). We use a ResNet with depth 50 as the encoder for pre-training, with the Adam optimizer, a learning rate of 0.001, and weight decay 10^−6. The temperature τ is set to 0.5 for all objectives other than J_RPC, which disables hidden normalization and uses τ = 128. The downstream task performance increases from 83.4% with J_CPC to 84.1% with J_RPC.

Confidence Intervals. We also provide confidence intervals for J_RPC and J_CPC on CIFAR-10, CIFAR-100, and ImageNet, using ResNet-18, ResNet-18, and ResNet-50 respectively (a 95% confidence level is chosen), in Table 4. Both CPC and RPC use the same experimental settings throughout this paper. Here we use the relative parameters (α = 1.0, β = 0.005, γ = 1.0) in J_RPC, which give the best performance on CIFAR-10. The confidence intervals of CPC do not overlap with those of RPC, which means the difference in downstream task performance between RPC and CPC is statistically significant.

A.9 RELATIVE PREDICTIVE CODING ON SPEECH

For speech representation learning, we adopt the general architecture from Oord et al. (2018). Given an input signal x_{1:T} with T time steps, we first pass it through an encoder φ_θ parametrized by θ to produce a sequence of hidden representations {h_{1:T}}, where h_t = φ_θ(x_t). After that, we obtain the contextual representation c_t at time step t with a sequential model ψ_ρ parametrized by ρ: c_t = ψ_ρ(h_1, . . . , h_t), where c_t contains the context information before time step t. For unsupervised pre-training, we use a multi-layer convolutional network as the encoder φ_θ and an LSTM with hidden dimension 256 as the sequential model ψ_ρ. Here, the contrast is between the positive pair (h_{t+k}, c_t), where k is the number of time steps ahead, and the negative pairs (h_i, c_t), where h_i is randomly sampled from N, a batch of hidden representations of signals assumed to be unrelated to c_t. The scoring function f based on Equation 2 at step t and look-ahead k is f_k = f_k(h, c_t) = exp(h^⊤ W_k c_t), where W_k is a learnable linear transformation defined separately for each k ∈ {1, ..., K}, and K is predetermined as 12 time steps. The loss in Equation 2 is then formulated as: ℓ^RPC_{t,k} = −( f_k(h_{t+k}, c_t) − (α/|N|) Σ
1. What is the focus and contribution of the paper on self-supervised training?
2. What are the strengths of the proposed approach, particularly in terms of training stability and downstream task performance?
3. What are the weaknesses of the paper, especially regarding the experimental design and the limitation of the proposed method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the proposed method or exploring its application in other contexts?
Review
The authors propose RPC (Relative Predictive Coding), which is intended to improve training stability (via chi-squared-divergence-based regularization), minibatch size sensitivity (avoiding the need to sample large batches), and downstream task performance (demonstrating generalization). The authors discuss estimation of MI and/or the probability ratio (of related over unrelated examples); the proposed solution estimates this ratio stably. The experiments are convincing. It is a good direction in self-supervised training with a convenient training scheme.

Some weaknesses:
- Fix alpha=0 and find the ratio or values of beta and gamma that give maximum performance. This would be interesting, since a low value of alpha also gives good performance.
- While discussing sensitivity to batch size, larger batch sizes should be tried, since the initial part of the paper notes that SimCLRv2 requires a huge batch size.
- Since the proposal is generic, can the authors comment on using it with something other than SimCLRv2?
- Rather than reporting specific values of alpha, beta, gamma (the proposed "relative parameters"), reporting results in graph format would be vastly more helpful. For example, fix alpha=0.001 and let the x and y axes of the plot be the other two relative parameters. (This is related to point 1.)
ICLR
Title
Coordinated Exploration via Intrinsic Rewards for Multi-Agent Reinforcement Learning

Abstract
Solving tasks with sparse rewards is one of the most important challenges in reinforcement learning. In the single-agent setting, this challenge has been addressed by introducing intrinsic rewards that motivate agents to explore unseen regions of their state spaces. Applying these techniques naively to the multi-agent setting results in agents exploring independently, without any coordination among themselves. We argue that learning in cooperative multi-agent settings can be accelerated and improved if agents coordinate with respect to what they have explored. In this paper we propose an approach for learning how to dynamically select between different types of intrinsic rewards which consider not just what an individual agent has explored, but all agents, such that the agents can coordinate their exploration and maximize extrinsic returns. Concretely, we formulate the approach as a hierarchical policy where a high-level controller selects among sets of policies trained on different types of intrinsic rewards and the low-level controllers learn the action policies of all agents under these specific rewards. We demonstrate the effectiveness of the proposed approach in a multi-agent gridworld domain with sparse rewards, and then show that our method scales up to more complex settings by evaluating on the VizDoom (Kempka et al., 2016) platform.

1 INTRODUCTION

Recent work in deep reinforcement learning effectively tackles challenging problems including the board game Go (Silver et al., 2016), Atari video games (Mnih et al., 2015), and simulated robotic continuous control (Lillicrap et al., 2016); however, these successful approaches often rely on frequent feedback indicating whether the learning agent is performing well, otherwise known as dense rewards. In many tasks, dense rewards can be difficult to specify without inducing locally optimal but globally sub-optimal behavior. As such, it is frequently desirable to specify only a sparse reward that simply signals whether an agent has attained success or failure on a given task.
Despite their desirability, sparse rewards introduce their own set of challenges. When rewards are sparse, determining which of an agent's actions led to a reward becomes more difficult, a phenomenon known in reinforcement learning as the credit-assignment problem. Furthermore, if rewards cannot be obtained by random actions, an agent will never receive a signal through which it can begin learning. As such, researchers have devised methods which attempt to provide agents with additional reward signals, known as intrinsic rewards, through which they can learn meaningful behavior (Oudeyer & Kaplan, 2009). A large subset of these works focus on learning intrinsic rewards that encourage exploration of the state space (Pathak et al., 2017; Houthooft et al., 2016; Burda et al., 2019; Ostrovski et al., 2017; Tang et al., 2017). Exploring the state space provides a useful inductive bias for many sparse reward problems where the challenge lies in "finding" rewards that may only be obtained in parts of the state space that are hard to reach by random exploration. These exploration-focused approaches frequently formulate their intrinsic rewards to measure the "novelty" of a state, such that agents are rewarded for taking actions that lead to novel states. Our work approaches the question of how to apply novelty-based intrinsic motivation in the cooperative multi-agent setting. Directly applying novelty-based intrinsic motivation to the multi-agent setting results in agents each exploring their shared state space independently from one another. In many cases, independent exploration may not be the most efficient method. For example, consider a task where multiple agents are placed in a maze and their goal is to collectively reach all of the landmarks that are spread out through the maze. It would be inefficient for the agents to explore the same areas redundantly. Instead, it would be much more sensible for agents to "divide and conquer," that is, to avoid redundant exploration. Thus, an ideal intrinsic reward for this task would encourage such behavior; however, the same behavior would not be ideal for other tasks. For example, take the same maze but change the task such that all agents need to reach the same landmark. Divide-and-conquer would no longer be an optimal exploration strategy, since agents only need to find one landmark and they all need to reach the same one. Cooperative multi-agent reinforcement learning can benefit from sharing information about exploration across agents; however, the question of what to do with that shared information depends on the task at hand. In order to improve exploration in cooperative multi-agent reinforcement learning, we must first identify what kinds of inductive biases can potentially be useful for multi-agent tasks and then devise intrinsic reward functions that incorporate those biases. We must then find a way to allow our agents to adapt their exploration to the given task, rather than committing to one type of intrinsic reward function. In this work, we first introduce a candidate set of intrinsic rewards for multi-agent exploration which hold differing properties with regard to how they explore the state space. Subsequently, we present a hierarchical method for simultaneously learning policies trained on different intrinsic rewards and selecting the policies which maximize extrinsic returns.
Importantly, all policies are trained using a shared replay buffer, drastically improving the sample efficiency and effectiveness of learning in cooperative multi-agent tasks with sparse rewards.

2 RELATED WORK

Single-Agent Exploration. In order to solve sparse reward problems, researchers have long worked on improving exploration in reinforcement learning. To this end, prior works commonly propose reward bonuses that encourage agents to reach novel states. In tabular domains, reward bonuses based on the inverse state-action count have been shown to be effective in speeding up learning (Strehl & Littman, 2008). In order to scale count-based approaches to large state spaces, many recent works have focused on devising pseudo state counts to use as reward bonuses (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017). Alternatively, some work has focused on defining intrinsic rewards for exploration based on inspiration from psychology (Oudeyer & Kaplan, 2009; Schmidhuber, 2010). These works use various measures of novelty as intrinsic rewards, including transition dynamics prediction error (Pathak et al., 2017), information gain with respect to a learned dynamics model (Houthooft et al., 2016), and random state embedding network distillation error (Burda et al., 2019).

Multi-Agent Reinforcement Learning (MARL). Multi-agent reinforcement learning introduces several unique challenges that recent work has attempted to address. These challenges include multi-agent credit assignment in cooperative tasks with shared rewards (Sunehag et al., 2018; Rashid et al., 2018; Foerster et al., 2018), non-stationarity of the environment in the presence of other learning agents (Lowe et al., 2017; Foerster et al., 2018; Iqbal & Sha, 2019), and learning of communication protocols between cooperative agents (Foerster et al., 2016; Sukhbaatar et al., 2016; Jiang & Lu, 2018).

Exploration in MARL. While the fields of exploration in RL and multi-agent RL are popular, relatively little work has been done at their intersection. Carmel & Markovitch (1997) consider exploration with respect to opponent strategies in competitive games, and Verbeeck et al. (2005) consider exploration of a large joint action space in a load balancing problem. Jaques et al. (2018) define an intrinsic reward function for multi-agent reinforcement learning that encourages agents to take actions which have the biggest effect on other agents' behavior, otherwise referred to as "social influence." Agogino & Tumer (2008) define metrics for evaluating the efficacy of reward functions in multi-agent domains. These works, while important, do not address the problem of exploring a large state space, or whether this exploration can be improved in multi-agent systems. A recent approach to collaborative evolutionary reinforcement learning (Khadka et al., 2019) shares some similarities with our approach. As in our work, the authors devise a method for learning a population of diverse policies with a shared replay buffer and dynamically selecting the best learner; however, their work is focused on single-agent tasks and does not incorporate any notion of intrinsic rewards. As such, it is not applicable to sparse reward problems in MARL.

3 BACKGROUND

Dec-POMDPs. In this work, we consider the setting of decentralized POMDPs (Oliehoek et al., 2016), which are used to describe cooperative multi-agent tasks. A decentralized POMDP (Dec-POMDP) is defined by a tuple (S, A, T, O, O, R, n, γ). In this setting we have n total agents.
S is the set of global states in the environment, O = ⊗_{i∈{1...n}} O_i is the set of joint observations, and A = ⊗_{i∈{1...n}} A_i is the set of possible joint actions. A specific joint action at one time step is denoted a = {a_1, . . . , a_n} ∈ A and a joint observation is o = {o_1, . . . , o_n} ∈ O. T is the state transition function, which defines the probability P(s′|s, a), and O is the observation function, which defines the probability P(o|a, s′). R is the reward function, which maps the combination of state and joint actions to a single scalar reward. Importantly, this reward is shared between all agents, so Dec-POMDPs always describe cooperative problems. Finally, γ is the discount factor, which determines how much the agents should favor immediate reward over long-term gain.

Soft Actor-Critic. Our approach uses Soft Actor-Critic (SAC) (Haarnoja et al., 2018) as its underlying algorithm. SAC incorporates an entropy term in the loss functions for both the actor and the critic, in order to encourage exploration and prevent premature convergence to a sub-optimal deterministic policy. The policy gradient with an entropy term is computed as follows:

∇_θ J(π_θ) = E_{s∼D, a∼π}[ ∇_θ log π_θ(a|s) ( −log π_θ(a|s)/α + Q_ψ(s, a) − b(s) ) ]   (1)

where D is a replay buffer that stores past environment transitions, ψ are the parameters of the learned critic, b(s) is a state-dependent baseline (e.g., the state value function V(s)), and α is a reward scale parameter determining the amount of entropy in an optimal policy. The critic is learned with the following loss function:

L_Q(ψ) = E_{(s,a,r,s′)∼D}[ (Q_ψ(s, a) − y)² ]   (2)
y = r(s, a) + γ E_{a′∼π(s′)}[ Q_ψ̄(s′, a′) − log(π_θ̄(a′|s′))/α ]   (3)

where ψ̄ are the parameters of the target critic, an exponential moving average of the past critics updated as ψ̄ ← (1 − τ)ψ̄ + τψ, and τ is a hyperparameter that controls the update rate.
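For reference, a minimal PyTorch sketch of the soft TD target and critic loss in Eqs. 2-3, assuming the target-critic values and next-action log-probabilities have already been computed by the relevant networks; the values of γ and the reward-scale α are illustrative.

```python
import torch

def soft_td_target(r, q_target_next, logp_next, gamma=0.99, alpha=0.2):
    # y = r + gamma * ( Q_target(s', a') - log pi(a'|s') / alpha ), Eq. 3
    return r + gamma * (q_target_next - logp_next / alpha)

def sac_critic_loss(q_pred, r, q_target_next, logp_next, gamma=0.99, alpha=0.2):
    y = soft_td_target(r, q_target_next, logp_next, gamma, alpha).detach()
    return ((q_pred - y) ** 2).mean()      # Eq. 2

# Toy usage with random tensors standing in for network outputs; after each step the target
# critic's parameters would be updated as psi_bar <- (1 - tau) * psi_bar + tau * psi.
q_pred = torch.randn(32, requires_grad=True)
sac_critic_loss(q_pred, torch.randn(32), torch.randn(32), torch.randn(32)).backward()
```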
Centralized Training with Decentralized Execution. A number of works in deep multi-agent reinforcement learning have followed the paradigm of centralized training with decentralized execution (Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Iqbal & Sha, 2019). This paradigm allows agents to train while sharing information (or incorporating information that is unavailable at test time) but to act using only local information, without requiring communication, which may be costly at execution time. Since most reinforcement learning applications use simulation for training, communication between agents during the training phase has a relatively lower cost.

4 INTRINSIC REWARD FUNCTIONS FOR MULTI-AGENT EXPLORATION

In this section we present a set of intrinsic reward functions for exploration that incorporate information about what other agents have explored. These rewards assume that each agent (indexed by i) has a novelty function f_i that determines how novel an observation is to it, based on its past experience. This function can be an inverse state visit count in discrete domains or, in large/continuous domains, it can be represented by recent approaches for developing novelty-based intrinsic rewards in complex domains, such as random network distillation (Burda et al., 2019). Note that we assume all agents share the same observation space, so that each agent's novelty function can operate on all other agents' observations. In Table 1 we define the intrinsic rewards that we use in our experiments (a short code sketch of these reward types is given at the end of this section). INDEPENDENT rewards are analogous to single-agent approaches to exploration, which define the intrinsic reward for an agent as the novelty of its own new observation resulting from an action. The remaining intrinsic reward functions use the novelty functions of other agents, in addition to their own, to further inform their exploration. MINIMUM rewards consider how novel all agents find a specific agent's observation and reward that agent based on the minimum of these novelties. This method leads to agents only being rewarded for exploring areas that no other agent has explored, which could be advantageous in scenarios where redundancy in exploration is not useful or is even harmful. COVERING rewards agents for exploring areas that they consider more novel than the average agent does. This reward results in agents shifting around the state space, only exploring regions as long as those regions are more novel to them than to their average teammate. BURROWING rewards do the opposite, only rewarding agents for exploring areas that they consider less novel than the average agent does. While seemingly counterintuitive, these rewards encourage agents to further explore areas they have already explored, with the hope that they will discover new regions that few or no other agents have seen, which they will then consider less novel than average and continue to explore. As such, these rewards result in agents continuing to explore until they exhaust all possible intrinsic rewards from a given region (i.e., hit a dead end), somewhat akin to a depth-first search. LEADER-FOLLOWER uses burrowing rewards for the first agent and covering rewards for the rest of the agents. This leads to one agent exploring a space thoroughly, with the rest of the agents following along and trying to cover that space. Note that these are not meant to be a comprehensive set of intrinsic reward functions applicable to all cooperative multi-agent tasks, but rather a set of examples of how exploration can be centralized in order to take other agents into account. Our approach, described in the following sections, is agnostic to the type of intrinsic rewards used and, as such, can incorporate other reward types not described here, as long as they can be computed off-policy.
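Since Table 1 itself is not reproduced in this excerpt, the following NumPy sketch encodes one plausible reading of the verbal descriptions above; in particular, the indicator-style covering and burrowing variants are assumptions rather than the paper's exact formulas.

```python
import numpy as np

def intrinsic_rewards(novelty, reward_type):
    """Per-agent intrinsic rewards from a matrix novelty[i, k] = f_i(o'_k), i.e. how novel
    agent k's new observation is according to agent i's novelty function."""
    own = np.diag(novelty).copy()            # f_k(o'_k): each agent's own novelty
    mean = novelty.mean(axis=0)              # average novelty over all agents' functions
    if reward_type == "independent":
        return own
    if reward_type == "minimum":
        return novelty.min(axis=0)           # rewarded only where no other agent has explored
    if reward_type == "covering":
        return own * (own > mean)            # rewarded where the area is more novel than average
    if reward_type == "burrowing":
        return own * (own < mean)            # rewarded where the area is less novel than average
    if reward_type == "leader_follower":
        rewards = intrinsic_rewards(novelty, "covering")
        rewards[0] = intrinsic_rewards(novelty, "burrowing")[0]   # agent 0 acts as the "leader"
        return rewards
    raise ValueError(reward_type)

# Toy usage: 3 agents, novelty values in [0, 1].
print(intrinsic_rewards(np.random.default_rng(0).random((3, 3)), "burrowing"))
```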
5 LEARNING POLICIES FOR MULTI-AGENT EXPLORATION

For many tasks, it is impossible to know a priori which intrinsic reward will be the most helpful. Furthermore, the type of reward that is most helpful could change over the course of training if the task is sufficiently complex. In this section we present our approach for simultaneously learning policies trained with different types of intrinsic rewards and dynamically selecting the best one.

Simultaneous Policy Learning. In order to learn policies for various types of intrinsic rewards in parallel, we utilize a shared replay buffer and off-policy learning to maximize sample efficiency. In other words, we learn policies and value functions for all intrinsic reward types from all collected data, regardless of which policies collected it. This parallel learning is made possible by the fact that we can compute our novelty functions off-policy, given the observations for each agent after each environment transition, which are saved in a replay buffer. For each type of reward, we learn a different "head" for our policies and critics. In other words, we learn a single network for each agent's set of policies that shares early layers and branches out into different heads for each reward type. For critics, we learn a single network across all agents that shares early layers and branches out into separate heads for each agent and reward type. We learn separate heads for intrinsic and extrinsic rewards, as in Burda et al. (2019). We provide a diagram of our model architecture in Figure 1.

We index agents by i ∈ {1 . . . n} and intrinsic reward types by j ∈ {1 . . . m}, where m is the total number of intrinsic reward types that we are considering. The policy for agent i, trained using reward j (in addition to extrinsic rewards), is represented by π^j_i. It takes as input agent i's observation, o_i, and outputs a distribution from which we can sample the action a_i. The parameters of this policy are Θ^j_i = {θ^share_i, θ^j_i}, where θ^share_i is a shared base/input (for agent i) in a neural network and θ^j_i is a head/output specific to this reward type. The extrinsic critic for policy head π^j_i is represented by Q^ex_{i,j}. It takes as input the global state s and the actions of all other agents a_{\i}, and it outputs the expected returns under policy π^j_i for each possible action that agent i can take, given all other agents' actions. The parameters of this critic are Ψ^ex_{i,j} = {ψ^share, ψ^ex_{i,j}}, where ψ^share is a shared base across all agents and reward types. A critic with a similar structure exists for predicting the intrinsic returns of actions taken by π^j_i, represented by Q^in_{i,j}, which uses the parameters Ψ^in_{i,j} = {ψ^share, ψ^in_{i,j}}. Note that the intrinsic critics share the same base parameters ψ^share. We drop the symbols representing the parameters of the policies (Θ) and the critics (Ψ) for readability. In our notation, the absence of a subscript or superscript refers to a group; for example, π^j refers to all agents' policies trained on intrinsic reward j. We train our critics with the following loss function, adapted from soft actor-critic:

L_Q(Ψ) = E_{(s,o,a,r,s′,o′)∼D}[ Σ_{j=1}^m Σ_{i=1}^n (Q^ex_{i,j}(s, a) − y^ex_{i,j})² + (Q^in_{i,j}(s, a) − y^in_{i,j})² ]   (4)
y^ex_{i,j} = r^ex(s, a) + γ E_{a′∼π̄^j(o′)}[ Q̄^ex_{i,j}(s′, a′) − log(π̄^j_i(a′_i|o′_i))/α ]   (5)
y^in_{i,j} = r^in_{i,j}(o′_i) + γ E_{a′∼π̄^j(o′)}[ Q̄^in_{i,j}(s′, a′) − log(π̄^j_i(a′_i|o′_i))/α ]   (6)

where Q̄ refers to the target Q-function, an exponentially weighted average of past Q-functions used for stability, and π̄ are similarly updated target policies. The intrinsic rewards laid out in Table 1 are represented as a function of the observation that results from the action taken, r^in_{i,j}(o′_i), where j specifies the type of reward. Importantly, we can calculate these losses for expected intrinsic and extrinsic returns for all policies given a single environment transition, allowing us to learn multiple policies for each agent in parallel. We train each policy head with the following gradient:

∇_{Θ^j_i} J(π^j_i) = E_{(s,o)∼D, a∼π^j}[ ∇_{Θ^j_i} log π^j_i(a_i|o_i) ( −log π^j_i(a_i|o_i)/α + A^j_i(s, a) ) ]   (7)
A^j_i(s, a) = Q^ex_{i,j}(s, a) + β Q^in_{i,j}(s, a) − V^j_i(s)   (8)
V^j_i(s) = Σ_{a′_i∈A_i} π^j_i(a′_i|o_i) ( Q^ex_{i,j}(s, {a′_i, a_{\i}}) + β Q^in_{i,j}(s, {a′_i, a_{\i}}) )   (9)

where β is a scalar that determines the weight of the intrinsic rewards relative to the extrinsic rewards, and A^j_i is a multi-agent advantage function (Foerster et al., 2018; Iqbal & Sha, 2019) used to help with multi-agent credit assignment.
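A small PyTorch sketch of the advantage computation in Eqs. 8-9 for one agent and one reward head with discrete actions, assuming the per-action extrinsic and intrinsic Q-values (with the other agents' actions held fixed) and the policy probabilities have already been computed; the value of β is illustrative.

```python
import torch

def multi_agent_advantage(q_ex, q_in, pi_probs, a_i, beta=0.1):
    """q_ex, q_in: (batch, |A_i|) values Q^ex_{i,j}, Q^in_{i,j} with the other agents' actions fixed;
    pi_probs: (batch, |A_i|) probabilities of pi^j_i(.|o_i); a_i: (batch,) actions agent i took."""
    q = q_ex + beta * q_in
    v = (pi_probs * q).sum(dim=1)                           # Eq. 9 baseline V^j_i(s)
    return q.gather(1, a_i.unsqueeze(1)).squeeze(1) - v     # Eq. 8

# Toy usage: batch of 5 states, 4 discrete actions.
adv = multi_agent_advantage(torch.randn(5, 4), torch.randn(5, 4),
                            torch.softmax(torch.randn(5, 4), dim=1),
                            torch.randint(0, 4, (5,)))
print(adv.shape)  # torch.Size([5])
```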
Dynamic Policy Selection. Now that we have established a method for simultaneously learning policies using different intrinsic reward types, we must devise a means of selecting between these policies. In order to select policies to use for environment rollouts, we must consider which policies maximize extrinsic returns, while taking into account the fact that there may still be "unknown unknowns," or regions that the agents have not seen yet where they may be able to further increase their extrinsic returns. As such, we must learn a meta-policy that, at the beginning of each episode, selects between the different sets of policies trained on different intrinsic rewards and maximizes extrinsic returns without collapsing to a single set of policies too early. We parameterize the selector policy Π with a vector φ that contains an entry for every reward type. The probability of sampling head j is Π(j) ∝ exp(φ[j]). Unlike the action policies, this high-level policy does not take any inputs, as we simply want to learn which set of policies trained on the individual intrinsic reward functions has the highest expected extrinsic returns from the beginning of the episode. The most sensible metric for selecting policies is the expected extrinsic return given by each policy head. We can use policy gradients to train the policy selector Π to maximize this value using the returns received when performing rollouts in the environment. We use the following gradient to train Π:

∇_φ J(Π) = E_{h∼Π}[ ∇_φ log Π(h) ( −log Π(h)/η + R^ex_h − b_Π ) ]   (10)
R^ex_h = Σ_{t=0}^T γ^t r^ex(s_t, a_t) | a ∼ π_h(o_t),    b_Π = Σ_{h′}^m Π(h′) μ_{h′}   (11)

where μ_h is a running mean of the returns received by head h in the past, and η is a parameter similar to α for the low-level policies, which promotes entropy in the selector policy. Entropy in the policy selector is important in order to prevent it from collapsing onto a single exploration type that does well at first but does not continue to explore as effectively as others. As such, we can learn a diverse set of behaviors based on various multi-agent intrinsic reward functions and select the one that maximizes performance on the task at hand at any point during training, while continuing to consider other policies that may lead to greater rewards.
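A minimal NumPy sketch of the selector update implied by Eqs. 10-11; the learning rate and the decay used for the running mean of returns are illustrative assumptions, as they are not specified in the text.

```python
import numpy as np

class HeadSelector:
    """Softmax meta-policy Pi(h) over reward-type heads, trained with the gradient of Eq. 10."""
    def __init__(self, n_heads, eta=0.1, lr=0.05, mu_decay=0.99, seed=0):
        self.phi = np.zeros(n_heads)        # logits: Pi(h) proportional to exp(phi[h])
        self.mu = np.zeros(n_heads)         # running mean of extrinsic returns per head
        self.eta, self.lr, self.mu_decay = eta, lr, mu_decay
        self.rng = np.random.default_rng(seed)

    def probs(self):
        e = np.exp(self.phi - self.phi.max())
        return e / e.sum()

    def sample(self):
        return int(self.rng.choice(len(self.phi), p=self.probs()))

    def update(self, h, ext_return):
        p = self.probs()
        self.mu[h] = self.mu_decay * self.mu[h] + (1 - self.mu_decay) * ext_return
        baseline = (p * self.mu).sum()                    # b_Pi in Eq. 11
        advantage = -np.log(p[h]) / self.eta + ext_return - baseline
        grad_log_p = -p
        grad_log_p[h] += 1.0                              # d log Pi(h) / d phi for a softmax
        self.phi += self.lr * advantage * grad_log_p

# Toy usage: head 2 keeps receiving higher extrinsic returns, so its probability grows.
sel = HeadSelector(n_heads=5)
for _ in range(200):
    h = sel.sample()
    sel.update(h, ext_return=1.0 if h == 2 else 0.0)
print(sel.probs())
```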
6 EXPERIMENTS

We begin by describing our evaluation domains and then present experimental results which demonstrate the effectiveness of our approach. We provide additional details in the appendix and will share code for both the model and environments. We use a maximum of four agents in gridworld and two agents in VizDoom. We encode several tasks in both domains related to collecting the items (displayed in yellow in Figure 2), each of which requires a different type of exploration:

TASK 1: Agents must cooperatively collect all treasure on the map in order to complete the task.
TASK 2: Agents must all collect the same treasure. The first agent to collect a treasure during an episode determines the goal for the rest of the agents.
TASK 3: Agents must all collect the specific treasure that is assigned to them.

The two-agent version of each task uses agents 1-2 and treasures A-B, the three-agent versions use agents 1-3 and treasures A-C, and the four-agent versions use agents 1-4 and treasures A-D. Agents receive a negative time penalty towards extrinsic rewards at each step, so they are motivated to complete the task as quickly as possible. The only positive extrinsic reward comes from any agent collecting a treasure that is allowed by the specific task, and rewards are shared between all agents. The optimal strategy in TASK 1 is for agents to spread out and explore separate portions of the map, while in TASK 2 they should explore the same areas, and in TASK 3 they should explore independently.

6.1 GRIDWORLD DOMAIN

We first test our approach using a multi-agent gridworld domain (pictured in Fig. 2a), which allows us to design environments where the primary challenge lies in a combination of exploring the state space efficiently and coordinating behaviors. The environment includes two sources of stochasticity: random transitions and black holes. At each step there is a 10% chance of an agent's action being replaced by a random one. Furthermore, there are several "black holes" placed around the map which have a probability of opening at each time step. This probability changes at each step using a biased random walk such that it moves toward one, until the hole opens and it resets to zero. If an agent steps into a black hole when it is open, it is sent back to its starting position. The spaces colored black are holes that are currently open, while the gray spaces are holes that have the possibility of opening at the next step (the darker they are, the higher the probability). We set the rate of black holes dropping out to be higher in TASK 1 than in the other two tasks, in order to balance the difficulty. The novelty function for each agent, $f_i$, which is used for calculating the intrinsic rewards in Table 1, is defined as $1/N^{\zeta}$, where $N$ is the number of times that the agent has visited its current cell and $\zeta$ is a decay rate selected as a hyperparameter (we find that $\zeta = 0.7$ works well for our purposes).

6.2 VIZDOOM DOMAIN

In order to test our method's ability to scale to more complex environments with similarly challenging exploration tasks, we implement tasks analogous to those in our gridworld environment (i.e. extrinsic rewards are defined identically) in the VizDoom framework (Kempka et al., 2016). We use the "My Way Home" map, which has been used as a test bed for single-agent exploration techniques (Pathak et al., 2017), and modify it for multi-agent tasks (pictured in Figure 2b). Since the agents are moved to a central location closer to their rewards than in the original map, we lower the action repeat from 4 to 2, in order to force agents to take twice as many steps to explore the same areas, maintaining the challenging nature of exploration in the original task. As in the gridworld setting, we use count-based intrinsic rewards for VizDoom; however, since VizDoom is not a discrete domain, we separate agents' (x, y) positions into discrete bins and use the counts for these bins. We again find that $\zeta = 0.7$ works well in our experiments.
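As a small illustration of the count-based novelty used above ($f_i(o) = 1/N^{\zeta}$ with $\zeta = 0.7$, and binned (x, y) positions for VizDoom), here is a sketch of a per-agent novelty module; the bin counts (30 x 26) follow Appendix A.1.2, while the map extents passed as low/high are assumptions, since the paper does not state them. In the gridworld, the same structure applies with one bin per cell.

    from collections import defaultdict

    class CountNovelty:
        # Count-based novelty f_i(o) = 1 / N**zeta, where N is the number of times this
        # agent has visited the (discretized) position in o; zeta = 0.7 in the paper.
        def __init__(self, zeta=0.7, num_bins=(30, 26), low=(0.0, 0.0), high=(1.0, 1.0)):
            self.zeta = zeta
            self.num_bins = num_bins
            self.low = low        # assumed map extents (not specified in the paper)
            self.high = high
            self.counts = defaultdict(int)

        def _bin(self, pos):
            # Map a continuous (x, y) position to a discrete cell index.
            return tuple(
                max(0, min(int((p - lo) / (hi - lo) * n), n - 1))
                for p, lo, hi, n in zip(pos, self.low, self.high, self.num_bins)
            )

        def update(self, pos):
            self.counts[self._bin(pos)] += 1

        def novelty(self, pos):
            n = max(self.counts[self._bin(pos)], 1)   # unvisited cells are treated as count 1
            return 1.0 / n ** self.zeta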
6.3 MAIN RESULTS

Figure 3a demonstrates the results of our approach over the course of training on the 2 agent version of TASK 1 in gridworld, and the final results on each task/agent/domain combination can be found in Table 2. The full training curves for all settings can be found in the appendix (Section A.4).

Table 2: Final results (mean ± standard deviation) for each domain, task, and number of agents.

Domain     Task  Agents  INDEPENDENT  MINIMUM      COVERING     BURROWING    LEADER-FOLLOWER  MULTI
GRIDWORLD  1     2       0.14 ± 0.05  1.62 ± 0.59  0.13 ± 0.12  1.98 ± 0.06  0.18 ± 0.24      2.00 ± 0.00
GRIDWORLD  1     3       1.16 ± 0.11  1.49 ± 0.76  0.00 ± 0.00  2.06 ± 1.05  0.34 ± 0.45      2.23 ± 0.73
GRIDWORLD  1     4       0.84 ± 0.29  1.78 ± 0.44  0.00 ± 0.00  1.90 ± 0.49  1.17 ± 0.39      2.04 ± 0.61
GRIDWORLD  2     2       2.00 ± 0.00  0.92 ± 0.10  1.11 ± 0.99  0.98 ± 0.05  1.73 ± 0.66      1.83 ± 0.41
GRIDWORLD  2     3       2.66 ± 0.80  1.11 ± 0.29  0.54 ± 0.80  1.80 ± 0.29  3.00 ± 0.00      1.80 ± 0.71
GRIDWORLD  2     4       1.83 ± 1.08  0.93 ± 0.13  0.22 ± 0.18  1.99 ± 0.67  2.66 ± 2.06      2.54 ± 1.21
GRIDWORLD  3     2       1.39 ± 0.94  0.67 ± 1.03  0.29 ± 0.37  0.67 ± 1.03  0.83 ± 0.67      2.00 ± 0.00
GRIDWORLD  3     3       1.68 ± 0.70  0.60 ± 0.73  0.09 ± 0.08  1.35 ± 1.16  1.59 ± 0.83      2.21 ± 0.91
GRIDWORLD  3     4       1.12 ± 0.47  1.36 ± 0.71  0.05 ± 0.05  2.14 ± 1.49  0.68 ± 0.53      1.73 ± 0.47
VIZDOOM    1     2       0.94 ± 0.54  1.57 ± 0.74  0.16 ± 0.17  1.94 ± 0.10  0.61 ± 0.43      1.98 ± 0.03
VIZDOOM    2     2       1.52 ± 0.75  1.53 ± 0.74  0.70 ± 1.00  0.63 ± 0.04  1.93 ± 0.10      1.23 ± 0.65
VIZDOOM    3     2       0.18 ± 0.19  0.64 ± 1.05  0.45 ± 0.46  0.29 ± 0.25  0.20 ± 0.17      1.64 ± 0.63

We train a team of agents using each of the multi-agent intrinsic reward functions defined in Table 1 individually, and then test our dynamic policy selection approach. We find that our approach is competitive with, or outperforms, the best performing individual exploration method in nearly all tasks. This performance is exciting since our method receives no prior information about which type of exploration would work best, while each type carries its own inductive bias. Notably, our learned policy selector learns to select the policies trained on intrinsic rewards that do well individually on the tasks. For instance, on TASK 1 with 2 agents, we find that our policy selector consistently selects BURROWING and MINIMUM rewards, the two best performing reward functions on that task. Furthermore, we find that our results on the more complex VizDoom domain mirror those in the gridworld, indicating that our methods are not limited to discrete domains, assuming that a reliable way of measuring the novelty of observations exists.

Interestingly, our approach is sometimes able to significantly surpass the performance of the best individual reward function on TASK 3. This task requires agents to collect the specific reward assigned to them, so we expect independent exploration to be the most effective; however, exploration types that perform "divide-and-conquer" type behavior, such as BURROWING and MINIMUM, have the potential to drastically speed up the exploration process if they happen to divide the space correctly, leading to a stark success-failure contrast in runs of these types. Since our method MULTI can select policies trained on these rewards, and otherwise fall back on INDEPENDENT policies if they are not working, we find that our method is able to surpass all individual reward types.

We find that our approach is unable to match the performance of the best individual method on TASK 2 in some settings (gridworld with 3 agents and VizDoom). This lack of success may be an indication that these particular settings require commitment to a specific exploration strategy early on in training, highlighting a limitation of our approach. Our method requires testing out all policies until we find one that reaches high extrinsic rewards, which can dilute the effectiveness of exploration early on.

6.4 ANALYSIS

Characteristics of Different Intrinsic Rewards
In order to better understand how each reward type encourages agents to explore the state space, we visualize their exploration in videos, viewable at the anonymized link https://sites.google.com/view/multi-exploration-iclr2020/home. INDEPENDENT rewards, as expected, result in agents exploring the whole state space without taking other agents into consideration. As a result, on TASK 1, which requires coordination between agents to spread out and explore different areas, INDEPENDENT rewards struggle; however, on TASK 3, where agents receive individualized goals, independent exploration usually performs better, relative to the other methods. TASK 2 also requires coordination, but the rate of black holes dropping out in the gridworld version is lower on that task, making exploration easier.
As a result, INDEPENDENT rewards perform well on TASK 2; however, we find that LEADER-FOLLOWER also performs well on this task, especially when more agents are involved, indicating that these rewards do a good job of biasing agents toward exploring similar regions of the environment. MINIMUM rewards prevent agents from exploring the same regions redundantly but can lead to situations where one of the agents is the first to explore all regions that provide sparse extrinsic rewards. In these cases, other agents are not aware of the extrinsic rewards and are also not motivated to explore for them, since another agent has already done so. COVERING rewards, as expected, lead to behavior where agents are constantly switching up the regions that they explore. While this behavior does not prove to be useful in the tasks we test, since the switching slows down overall exploration progress, it may be useful in scenarios where agents are required to spread out. Finally, BURROWING rewards cause agents to each explore different subregions and continue to explore those regions until they exhaust their options. This behavior is particularly effective on TASK 1, where agents are best served by spreading out and exploring the whole map in a mutually exclusive fashion.
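Table 1 (not reproduced in this excerpt) gives the exact definitions of these reward functions; purely to illustrate the qualitative behavior described above, the sketch below derives the five reward types from a matrix of cross-agent novelties, using indicator-gated novelties for COVERING and BURROWING as a stand-in for the paper's actual formulas.

    import numpy as np

    def intrinsic_rewards(novelties, reward_type):
        # novelties[i][k] = f_i(o'_k): how novel agent k's new observation is to agent i.
        # Returns one intrinsic reward per agent. The gated forms below are assumptions.
        nov = np.asarray(novelties, dtype=float)
        own = np.diag(nov)              # f_i(o'_i): each agent's novelty of its own observation
        mean_all = nov.mean(axis=0)     # average novelty of o'_i over all agents' novelty functions
        if reward_type == "independent":
            return own
        if reward_type == "minimum":
            return nov.min(axis=0)      # the lowest novelty any agent assigns to o'_i
        if reward_type == "covering":
            return own * (own > mean_all)   # rewarded only where the area is more novel than average
        if reward_type == "burrowing":
            return own * (own < mean_all)   # rewarded only where the area is less novel than average
        if reward_type == "leader_follower":
            lead = intrinsic_rewards(nov, "burrowing")
            follow = intrinsic_rewards(nov, "covering")
            return np.concatenate(([lead[0]], follow[1:]))  # first agent burrows, the rest cover
        raise ValueError(reward_type)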
Ablations
We compare to a baseline meta-policy which simply selects the action policies uniformly at random. We find that our approach is significantly superior to this baseline (see Figure 3b, Multi (Uniform Meta-Policy)). Furthermore, we test a version of our method where all policies (with different random initializations) are trained on independent rewards (Multi (All Independent)). The purpose of this ablation is to test the degree to which the specific multi-agent intrinsic reward functions are helpful, as opposed to simply providing multiple options at each episode. Again, we find that our method outperforms the baseline, indicating that both aspects of our approach (diverse intrinsic reward functions which share information across agents, and a meta-policy selector that maximizes extrinsic rewards) are crucial for success in multi-agent exploration tasks.

We perform two further ablations/comparisons. Results on task 1 with 2 agents in GridWorld are viewable in Figure 3b, and results on tasks 2 and 3 with 2 agents are viewable in the Appendix (A.5). In the first (Centralized) we compute intrinsic rewards under the assumption that all agents are treated as one agent. In other words, we use the inverse count of the number of times that all agents have jointly taken up their combined positions, rather than considering agents independently. While this reward function will ensure that the global state space is thoroughly searched, it lacks the inductive biases toward spatial coordination that our reward functions incorporate. As such, it does not learn as efficiently as our method. In the second (Multi (No Entropy)) we remove the entropy term from the head selector loss function in order to test its importance. We find that this ablation is unable to match the performance of the full method, indicating that entropy is crucial in making sure that our method does not converge early to a suboptimal policy selector.

7 CONCLUSION

We propose a set of multi-agent intrinsic reward functions with differing properties, and compare them both qualitatively (through videos) and quantitatively on several multi-agent exploration tasks in both a gridworld domain as well as in VizDoom. Overall, we can see that cooperative multi-agent tasks can, in many cases, benefit from intrinsic rewards that take into account what other agents have explored, but there are various ways to incorporate that information, each with differing properties. As such, we propose a method for learning policies for all intrinsic reward types simultaneously while dynamically selecting the most effective ones. We show that our method is capable of matching or surpassing the performance of the best performing intrinsic reward type on various tasks while using the same number of samples collected from the environment. In future work we hope to introduce methods for directly learning the multi-agent intrinsic reward functions, rather than selecting from a set.

A APPENDIX

A.1 ENVIRONMENT DETAILS

A.1.1 GRIDWORLD

The black holes which send agents back to their starting positions if they are stepped into are an important aspect of the environment, as they add difficulty to exploration. The probability, $\rho$, of a black hole opening at each step, $t$, evolves as $\rho_{t+1} = \rho_t + \mathcal{N}(\mu, \sigma)$, where $\mu = \sigma = 0.05$ for TASK 1 and $\mu = \sigma = 0.005$ for TASKS 2 and 3. Agents observe their global position in (x, y) coordinates (scalars), as well as local information regarding walls in adjacent spaces, the probability of their adjacent spaces opening into a black hole, the relative position of other agents (if they are within 3 spaces), as well as information about which treasures the agent has already collected in the given episode. The global state is represented by the (x, y) coordinates of all agents, as one-hot encoded vectors for x and y separately, as well as the local information of all agents regarding black holes, walls, and treasures collected. Each agent's action space consists of the 4 cardinal directions as well as an option to not move, which is helpful in cases where an agent is waiting for a black hole to be safe to cross.

A.1.2 VIZDOOM

Agents receive their egocentric view (Figure 2c) in the form of 48x48 grayscale images as observations, along with an indicator of which agents (if any) have collected each reward, and we use a vector-based global state which includes all agents' (x, y) positions and velocities, their orientations, as well as the same indicator of which agent has collected each reward. As in the gridworld setting, we use count-based intrinsic rewards for VizDoom; however, since VizDoom is not a discrete domain, we separate agents' (x, y) positions into discrete bins and use the counts for these bins. There are 30 bins in the x dimension and 26 in the y dimension. (x, y) positions in the global state are represented both as scalars and one-hot vectors indicating which bin the agents are currently occupying. Each agent can choose from 3 actions at each time step: turn left, turn right, or go forward.

A.2 TRAINING DETAILS

The training procedure is detailed in Algorithm 1, and all hyperparameters are listed in Tables 3 and 4. Hyperparameters were selected by tuning one parameter at a time through intuition on task 1 with 2 agents and then applying to the rest of the settings with minimal changes. Where hyperparameters differ between settings, we make a footnote denoting them as such.
Algorithm 1 Training Procedure for Multi-Explore w/ Soft Actor-Critic (Haarnoja et al., 2018)
 1: Initialize environment with n agents
 2: Initialize replay buffer, D
 3: t_update ← 0
 4: t_ep ← max_ep_length
 5: for t = 1 . . . total_steps do
 6:     if episode done or t_ep == max_ep_length then
 7:         for j = 1 . . . num_updates do
 8:             UPDATESELECTOR(R, h)            ▷ Eqs 10-11 in main text
 9:         end for
10:         s, o ← RESETENV
11:         h ∼ Π                               ▷ Sample policy head
12:         t_ep ← 0
13:         R ← 0
14:     end if
15:     Select actions a_i ∼ π_i^h(·|o_i) for each agent, i
16:     Send actions to environment and get s, o, r
17:     R ← R + γ^{t_ep} r
18:     Store transitions for all environments in D
19:     t_update += 1
20:     t_ep += 1
21:     if t_update == steps_per_update then
22:         for j = 1 . . . num_updates do
23:             Sample minibatch, B
24:             UPDATECRITIC(B)                 ▷ Eqs 4-6 in main text
25:             UPDATEPOLICIES(B)               ▷ Eqs 7-9 in main text
26:             Update target parameters: Ψ̄ = τΨ̄ + (1 − τ)Ψ, Θ̄ = τΘ̄ + (1 − τ)Θ
27:         end for
28:         t_update ← 0
29:     end if
30: end for

A.3 NETWORK ARCHITECTURES

In this section we list, in pseudo-code, the architectures we used for all policies and critics.

A.3.1 GRIDWORLD

θ_i^share (shared for policy heads):
    obs_size = observations.shape[1]
    fc1 = Linear(in_dim=obs_size, out_dim=128)
    nl1 = ReLU()

θ_i^j (specific to each policy head):
    n_acs = actions.shape[1]
    fc2 = Linear(in_dim=fc1.out_dim, out_dim=32)
    nl2 = ReLU()
    fc3 = Linear(in_dim=fc2.out_dim, out_dim=n_acs)

ψ^share (shared across critics for all agents and reward types):
    state_size = states.shape[1]
    fc1 = Linear(in_dim=state_size, out_dim=128)
    nl1 = ReLU()

ψ_{i,j} (specific to each agent/policy head combination, same architecture for extrinsic and intrinsic critics):
    n_acs = actions.shape[1]
    # fc2 takes other agents' actions as input
    fc2 = Linear(in_dim=fc1.out_dim + (num_agents - 1) * n_acs, out_dim=128)
    nl2 = ReLU()
    fc3 = Linear(in_dim=fc2.out_dim, out_dim=n_acs)

A.3.2 VIZDOOM

θ_i^share (shared for policy heads belonging to one agent):
    # vector observation encoder
    vect_obs_size = vector_observations.shape[1]
    vect_fc = Linear(in_dim=vect_obs_size, out_dim=32)
    vect_nl = ReLU()
    # image observation encoder
    img_obs_channels = image_observations.shape[1]
    pad1 = ReflectionPadding(size=1)
    conv1 = Conv2D(in_channels=img_obs_channels, out_channels=32, filter_size=3, stride=2)
    conv_nl1 = ReLU()
    pad2 = ReflectionPadding(size=1)
    conv2 = Conv2D(in_channels=conv1.out_channels, out_channels=32, filter_size=3, stride=2)
    conv_nl2 = ReLU()
    pad3 = ReflectionPadding(size=1)
    conv3 = Conv2D(in_channels=conv2.out_channels, out_channels=32, filter_size=3, stride=2)
    conv_nl3 = ReLU()
    pad4 = ReflectionPadding(size=1)
    conv4 = Conv2D(in_channels=conv3.out_channels, out_channels=32, filter_size=3, stride=2)
    conv_nl4 = ReLU()
    conv_flatten = Flatten()  # flatten output of conv layers
    conv_fc = Linear(in_dim=conv_flatten.out_dim, out_dim=128)
    conv_fc_nl = ReLU()

θ_i^j (specific to each policy head):
    n_acs = actions.shape[1]
    # takes concatenation of image and vector encodings as input
    fc_out1 = Linear(in_dim=conv_fc.out_dim + vect_fc.out_dim, out_dim=32)
    fc_out_nl = ReLU()
    fc_out2 = Linear(in_dim=fc_out1.out_dim, out_dim=n_acs)

ψ^share (shared across critics for all agents and reward types):
    state_size = states.shape[1]
    fc1 = Linear(in_dim=state_size, out_dim=256)
    nl1 = ReLU()

ψ_{i,j} (specific to each agent/policy head combination, same architecture for extrinsic and intrinsic critics):
    n_acs = actions.shape[1]
    # fc2 takes other agents' actions as input
    fc2 = Linear(in_dim=fc1.out_dim + (num_agents - 1) * n_acs, out_dim=256)
    nl2 = ReLU()
    fc3 = Linear(in_dim=fc2.out_dim, out_dim=n_acs)

A.4 TRAINING CURVES

A.4.1 GRIDWORLD

A.4.2 VIZDOOM

A.5 MORE ABLATIONS

In this section we consider two ablations/comparisons to our model across all three tasks in the 2 agent version of gridworld.
In the first (Centralized) we compute intrinsic rewards under the assumption that all agents are treated as one agent. In other words, we use the inverse count of the number of times that all agents have jointly taken up their combined positions, rather than considering agents independently. While this reward function will ensure that the global state space is thoroughly searched, it lacks the inductive biases toward spatial coordination that our reward functions incorporate. As such, it does not learn as efficiently as our method in any of the three tasks. In the second (Multi (No Entropy)) we remove the entropy term from the head selector loss function in order to test its importance. We find that this ablation is unable to match the performance of the full method, indicating that entropy is crucial in making sure that our method does not converge early to a suboptimal policy selector.

A.6 ANALYZING META-POLICY

In Figure 19 we analyze the behavior of the meta-policy in two separate runs. We evaluate on Task 3, since we find that our method is able to surpass the best individual reward function. This task assigns specific goals to each agent. As such, one might expect that independent exploration would work most effectively in this setting. While independent exploration is effective (see Figure 10), we find that our method can outperform it. In both runs, we find that burrowing rewards are selected when the agents finally learn how to solve the task; however, we find that burrowing rewards are not necessarily successful when deployed on their own. This lack of success is likely due to the fact that these rewards cause the agents to pick a region and commit to exploring it for the duration of training. As such, the agents may pick the "wrong" region at first and never be able to recover. On the other hand, using our method, the meta-policy can wait until the burrowing exploration regions align with the assigned rewards and then select the policies trained on these rewards. This usually ends up being more efficient than waiting for the agents to explore the whole map using independent rewards.
1. What is the main contribution of the paper regarding multi-agent exploration?
2. How does the proposed method handle coordination between agents at execution time?
3. Is the number of agents fixed at training time or can it be changed during execution?
4. Can you explain why the use of a dynamic policy selection is beneficial and how it compares to classical bandit algorithms?
5. Are there any limitations to the applicability of the proposed method when scaling up to more complex environments?
6. Do you have any concerns regarding the experimental setup and its relation to traditional evaluation setups in Vizdoom?
Review
Contribution: The paper proposes to use a set of handcrafted intrinsic rewards that depend on the novelty of an observation as perceived by the rest of the other agents. For each pair of reward and agent, they learn a policy and a value function through an actor-critic method, and then a meta-policy chooses at the beginning of each episode which intrinsic rewards to use, meaning that the policy used by the agents corresponds to the one that maximizes the reward chosen.

Review: The major limitation of the paper in my opinion is the fact that the "coordination" that occurs here is only happening at training time, not at execution time. The agents eventually learn whatever trajectory they need to perform, and then proceed to do so without any interaction with the other agents. In a sense, they don't even learn to explore collaboratively. In other words, agents trained on task 1 in a given maze would not be able to solve task 2 on the same maze without essentially relearning everything from scratch. The other corollary of the fact that each agent learns its own policy is that the number of agents is fixed at training time, preventing testing with a different number of agents, as is sometimes done in the literature ([1], [2]). Given this limitation, the scope of the work basically reduces to the exploration of a fixed environment when the action space can be factored into different agents. This "multi-agent" formulation is presumably meant to break down the computational complexity of having a joint observation/action space. However, the experiments are conducted only with a very limited number of agents (only 2 in the non-toy environment of VizDoom). This small scale doesn't, in my opinion, demonstrate the advantage of the decomposition of the MDP over, say, SOTA single-agent exploration methods applied to the Cartesian product of all the agents' action spaces (in VizDoom the paper considers only 3 actions, so with two agents it would amount to 9 actions, which is still very tractable). Once the trajectories of both agents are found, they can be distilled to each of them individually so that they only depend on the local observation.

Regarding the experiments on the VizDoom environment, it appears that the traditional evaluation setup [3] doesn't involve providing the global position (x, y) to the agents as part of the observations (it must be inferred from the visual feed), contrary to the experimental setup presented in this paper. In my opinion, this weakens the claim that the method "scales to more complex environments," since providing the position essentially makes the environment similar to a grid-world (arguably the visual feed isn't even needed to solve the task).

The use of a dynamic policy selection is somewhat interesting, but would benefit from better investigation. Firstly, it is not clear to me if the selection of the policy to use during training affects all the trajectories of the batch, or if different episodes of the batch may have a different policy. Secondly, it seems that the setting is typically that of a (non-stationary) bandit, since there is no state and the "reward" is the return obtained by the policy. Could you share the reason behind the choice of an actor-critic algorithm over classical bandit algorithms? One obvious advantage of the latter is provable regret bounds. In all, the selection policy seems to be useful during training, since it sometimes yields better solutions than any of the individual reward schemes.
It suggests that some form of curriculum over the rewards is occurring during training, but if this is really what is going on, then it's possible that the relevant literature on curriculum learning may offer more stable and principled solutions than an actor-critic, for example population-based training. This could potentially solve the issues observed in task 2.

[1] Relational Deep Reinforcement Learning, Zambaldi et al., https://arxiv.org/abs/1806.01830
[2] A Structured Prediction Approach for Generalization in Cooperative Multi-Agent Reinforcement Learning, Carion et al., https://arxiv.org/abs/1910.08809
[3] Curiosity-driven Exploration by Self-supervised Prediction, Pathak et al., ICML 2017
In the first (Centralized) we compute intrinsic rewards under the assumption that all agents are treated as one agent. In other words, we use the inverse count of the number of times that all agents have jointly taken up their combined positions, rather than considering agents independently. While this reward function will ensure that the global state space is thoroughly searched, it lacks the inductive biases toward spatial coordination that our reward functions incorporate. As such, it does not learn as efficiently as our method in any of the three tasks. In the second (Multi (No Entropy)) we remove the entropy term from the head selector loss function in order to test its importance. We find that this ablation is unable to match the performance of the full method, indicating that entropy is crucial in making sure that our method does not converge early to a suboptimal policy selector. A.6 ANALYZING META-POLICY In Figure 19 we analyze the behavior of the meta-policy in two separate runs. We evaluate on Task 3, since we find that our method is able to surpass the best individual reward function. This task assigns specific goals to each agent. As such, one might expect that independent exploration would work most effectively in this setting. While independent exploration is effective (see Figure 10), we find that our method can outperform it. In both runs, we find that burrowing rewards are selected when the agents finally learn how to solve the task; however, we find that burrowing rewards are not necessarily successful when deployed on their own. This lack of success is likely due to the fact that these rewards cause the agents to pick a region and commit to exploring it for the duration of training. As such, the agents may pick the ”wrong” region at first and never be able to recover. On the other hand, using our methods, the meta-policy can wait until the burrowing exploration regions align with the assigned rewards and then select the policies trained on these rewards. This usually ends up being more efficient than waiting for the agents to explore the whole map using independent rewards.
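To make the appendix architectures above easier to reuse, the following is a minimal PyTorch transcription of the gridworld policy listed in A.3.1: a shared trunk (θ^share_i) with one head (θ^j_i) per intrinsic reward type. The layer sizes come from the pseudo-code; the class name, the Categorical output distribution, and the head-selection interface are assumptions for illustration.

```python
import torch
import torch.nn as nn


class GridworldPolicy(nn.Module):
    """Shared trunk with one output head per intrinsic reward type (Appendix A.3.1)."""

    def __init__(self, obs_size: int, n_actions: int, n_reward_types: int,
                 hidden: int = 128, head_hidden: int = 32):
        super().__init__()
        # theta_share: shared base for this agent
        self.trunk = nn.Sequential(nn.Linear(obs_size, hidden), nn.ReLU())
        # theta_j: one head per intrinsic reward type
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden, head_hidden), nn.ReLU(),
                nn.Linear(head_hidden, n_actions),
            )
            for _ in range(n_reward_types)
        )

    def forward(self, obs: torch.Tensor, head: int) -> torch.distributions.Categorical:
        # Select the output head corresponding to the sampled intrinsic reward type.
        logits = self.heads[head](self.trunk(obs))
        return torch.distributions.Categorical(logits=logits)
```

One such module per agent reproduces the per-agent parameter split described in Section 5; the VizDoom variant would add the convolutional image encoder from A.3.2 in front of the trunk.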
1. How effective and efficient is the proposed method in exploiting exploration via intrinsic rewards for multi-agent systems? 2. Are the evaluation tasks in the paper few and simple, and do they need to be more diverse and complex to generalize better? 3. Are the intrinsic reward types proposed in the paper motivated by the tasks in the paper or sufficient for tasks in general? 4. Would using a more recent novelty metric allow the method to work on more interesting and complex tasks? 5. Can the algorithm handle various goals for different agents in a multi-agent setting, or does it assume all agents share the same goal? 6. How does the method work in a non-centralized training form, especially when asking other agents about their preferences and novelty of states is not feasible? 7. What is the comprehensiveness of the intrinsic rewards used in this work, and how were they selected? 8. Can the policy selector design optimizing which intrinsic reward to toggle based on extrinsic rewards observed be further analyzed empirically? 9. Are there any standard tasks available that can replace task 2, which seems contrived? 10. Can the authors provide more explicit information regarding the origin of the rewards discussed before section 6.1? 11. Would including more tasks where the decision on intrinsic rewards is less obvious strengthen the paper's contribution? 12. Can the environment's "black holes" be explained further, and how do agents detect them? 13. Is the novel metric used count-based, and could something like ICM or RND provide a more robust evaluation? 14. Can the authors clarify why some numbers are bold in table 2 and include this information in the caption? 15. Are the resulting behaviors from the combination of two intrinsic rewards worth exploring further?
Review
Review Overall I like the approach in the paper. It proposes a nice two-pronged method for exploiting exploration via intrinsic rewards for multi-agent systems. The parts that are a bit lacking in the current version of the paper are that the evaluation tasks are few and a bit simple, and I think there needs to be more discussion on the "coverage" of the intrinsic reward types. Are the ones proposed motivated by the tasks in the paper or are they sufficient for tasks in general? Lastly, using a more recent novelty metric could allow the method to work on more interesting/complex tasks. More detailed feedback: - It would be good to include more learning curves in the main text for the paper. - Since applying intrinsic motivation to multi-agent simulations seems like a natural idea, one option would be to convert the problem to a "single" agent problem to compare against the "normal" application of intrinsic rewards. This might be another baseline to consider for comparison. - It says that all agents share the same replay buffer. Does this also imply that every agent is performing the same task and there are just many agents? This does not make the problem very multi-agent with different goals. Would it affect the algorithm significantly to work on an environment where the agents have various types of goals? - As is noted in the text, this method appears to work well in the centralized training scheme that many have adopted recently. However, it makes me wonder if there is a way to employ these exploration schemes in a non-centralized training form. The ability to ask other agents in the world about their preferences and novelty of states appears to be a strong assumption, especially in a multi-agent robotics problem. - While the authors note that the intrinsic rewards used in this work are not comprehensive, it would be good to note how comprehensive they are. Are there a few that were left out on purpose? Do the authors believe this set is sufficient? This statement makes it seem like the authors just tried a few options and found one that worked. It would be good to expand on this discussion more. - More detail for Figure 1 would be helpful to understand the overall network design. While that figure is helpful, maybe it would be good to include a version that goes into detail for the 2 agent environment. Then a more compressed n agent version can also be shown. - The paper describes a policy selector that is a type of high-level policy for HRL. This design seems rather unique in that this part of the policy can optimize which intrinsic reward to toggle based on the extrinsic rewards observed. I like it. It is noted that entropy is important for this design. Can this be analyzed in an empirical way? Is this true for most environments/tasks? - Task 2 seems a bit contrived. Is there another instance of this type of task elsewhere in another paper? It would be better to use more standard tasks if they are available. - Before section 6.1 the paper is discussing rewards that are received. It would be good to be more explicit about where these rewards are coming from. I think it is meant that these rewards are the extrinsic rewards but it does not say. - As noted just before section 6.1, it seems that for the collection of tasks 1-3 it is already obvious what types of intrinsic rewards should be used. It would be good to include more tasks where this decision is less obvious. - Why are there "black holes" in the environment? Also, if an agent steps into a black hole they are crushed, never to be seen again.
What you describe sounds more like a wormhole where one end is non-stationary... Also, can the agents detect the presence of a black hole in some way? - It appears the novelty metric is count-based. While this can work in practice, it seems a rather simple metric. Is it possible to use something more like ICM or RND that was referenced in the paper? Especially for the VizDoom environment? - In Table 2, why are some of the numbers bold? It would be good to include this information in the caption for the table. - I am not sure the discussion on the behaviours that the intrinsic reward functions result in is very surprising. Maybe there is a more interesting behaviour that results from the combination of two intrinsic rewards?
ICLR
Title Coordinated Exploration via Intrinsic Rewards for Multi-Agent Reinforcement Learning Abstract Solving tasks with sparse rewards is one of the most important challenges in reinforcement learning. In the single-agent setting, this challenge has been addressed by introducing intrinsic rewards that motivate agents to explore unseen regions of their state spaces. Applying these techniques naively to the multi-agent setting results in agents exploring independently, without any coordination among themselves. We argue that learning in cooperative multi-agent settings can be accelerated and improved if agents coordinate with respect to what they have explored. In this paper we propose an approach for learning how to dynamically select between different types of intrinsic rewards which consider not just what an individual agent has explored, but all agents, such that the agents can coordinate their exploration and maximize extrinsic returns. Concretely, we formulate the approach as a hierarchical policy where a high-level controller selects among sets of policies trained on different types of intrinsic rewards and the low-level controllers learn the action policies of all agents under these specific rewards. We demonstrate the effectiveness of the proposed approach in a multi-agent gridworld domain with sparse rewards, and then show that our method scales up to more complex settings by evaluating on the VizDoom (Kempka et al., 2016) platform. N/A Solving tasks with sparse rewards is one of the most important challenges in reinforcement learning. In the single-agent setting, this challenge has been addressed by introducing intrinsic rewards that motivate agents to explore unseen regions of their state spaces. Applying these techniques naively to the multi-agent setting results in agents exploring independently, without any coordination among themselves. We argue that learning in cooperative multi-agent settings can be accelerated and improved if agents coordinate with respect to what they have explored. In this paper we propose an approach for learning how to dynamically select between different types of intrinsic rewards which consider not just what an individual agent has explored, but all agents, such that the agents can coordinate their exploration and maximize extrinsic returns. Concretely, we formulate the approach as a hierarchical policy where a high-level controller selects among sets of policies trained on different types of intrinsic rewards and the low-level controllers learn the action policies of all agents under these specific rewards. We demonstrate the effectiveness of the proposed approach in a multi-agent gridworld domain with sparse rewards, and then show that our method scales up to more complex settings by evaluating on the VizDoom (Kempka et al., 2016) platform. 1 INTRODUCTION Recent work in deep reinforcement learning effectively tackles challenging problems including the board game Go (Silver et al., 2016), Atari video games (Mnih et al., 2015), and simulated robotic continuous control (Lillicrap et al., 2016); however, these successful approaches often rely on frequent feedback indicating whether the learning agent is performing well, otherwise known as dense rewards. In many tasks, dense rewards can be difficult to specify without inducing locally optimal but globally sub-optimal behavior. As such, it is frequently desirable to specify only a sparse reward that simply signals whether an agent has attained success or failure on a given task. 
Despite their desirability, sparse rewards introduce their own set of challenges. When rewards are sparse, determining which of an agent’s actions led to a reward becomes more difficult, a phenomenon known in reinforcement learning as the credit-assignment problem. Furthermore, if rewards cannot be obtained by random actions, an agent will never receive a signal through which it can begin learning. As such, researchers have devised methods which attempt to provide agents with additional reward signals, known as intrinsic rewards, through which they can learn meaningful behavior (Oudeyer & Kaplan, 2009). A large subset of these works focus on learning intrinsic rewards that encourage exploration of the state space (Pathak et al., 2017; Houthooft et al., 2016; Burda et al., 2019; Ostrovski et al., 2017; Tang et al., 2017). Exploring the state space provides a useful inductive bias for many sparse reward problems where the challenge lies in ”finding” rewards that may only be obtained in parts of the state space that are hard to reach by random exploration. These exploration-focused approaches frequently formulate their intrinsic rewards to measure the ”novelty” of a state, such that agents are rewarded for taking actions that lead to novel states. Our work approaches the question of how to apply novelty-based intrinsic motivation in the cooperative multi-agent setting. Directly applying novelty-based intrinsic motivation to the multi-agent setting results in agents each exploring their shared state space independently from one another. In many cases, independent exploration may not be the most efficient method. For example, consider a task where multiple agents are placed in a maze and their goal is to collectively reach all of the landmarks that are spread out through the maze. It would be inefficient for the agents to explore the same areas redundantly. Instead, it would be much more sensible for agents to ”divide-and-conquer,” or avoid redundant exploration. Thus, an ideal intrinsic reward for this task would encourage such behavior; however, the same behavior would not be ideal for other tasks. For example, take the same maze but change the task such that all agents need to reach the same landmark. Divide-and-conquer would no longer be an optimal exploration strategy since agents only need to find one landmark and they all need to reach the same one. Cooperative multi-agent reinforcement learning can benefit from sharing information about exploration across agents; however, the question of what to do with that shared information depends on the task at hand. In order to improve exploration in cooperative multi-agent reinforcement learning, we must first identify what kinds inductive biases can potentially be useful for multi-agent tasks and then devise intrinsic reward functions that incorporate those biases. Then, we must find a way to allow our agents to adapt their exploration to the given task, rather than committing to one type of intrinsic reward function. In this work, we first introduce a candidate set of intrinsic rewards for multiagent exploration which hold differing properties with regards to how they explore the state space. Subsequently, we present a hierarchical method for simultaneously learning policies trained on different intrinsic rewards and selecting the policies which maximize extrinsic returns. 
Importantly, all policies are trained using a shared replay buffer, drastically improving the sample efficiency and effectiveness of learning in cooperative multi-agent tasks with sparse rewards. 2 RELATED WORK Single-Agent Exploration In order to solve sparse reward problems, researchers have long worked on improving exploration in reinforcement learning. To achieve these means, prior works commonly propose reward bonuses that encourage agents to reach novel states. In tabular domains, reward bonuses based on the inverse state-action count have been shown to be effective in speeding up learning (Strehl & Littman, 2008). In order to scale count-based approaches to large state spaces, many recent works have focused on devising pseudo state counts to use as reward bonuses (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017). Alternatively, some work has focused on defining intrinsic rewards for exploration based on inspiration from psychology (Oudeyer & Kaplan, 2009; Schmidhuber, 2010). These works use various measures of novelty as intrinsic rewards including: transition dynamics prediction error (Pathak et al., 2017), information gain with respect to a learned dynamics model (Houthooft et al., 2016), and random state embedding network distillation error (Burda et al., 2019). Multi-Agent Reinforcement Learning (MARL) Multi-agent reinforcement learning introduces several unique challenges that recent work has attempted to address. These challenges include: multi-agent credit assignment in cooperative tasks with shared rewards (Sunehag et al., 2018; Rashid et al., 2018; Foerster et al., 2018), non-stationarity of the environment in the presence of other learning agents (Lowe et al., 2017; Foerster et al., 2018; Iqbal & Sha, 2019), and learning of communication protocols between cooperative agents (Foerster et al., 2016; Sukhbaatar et al., 2016; Jiang & Lu, 2018). Exploration in MARL While the fields of exploration in RL and multi-agent RL are popular, relatively little work has been done at the intersection of both. Carmel & Markovitch (1997) consider exploration with respect to opponent strategies in competitive games, and Verbeeck et al. (2005) consider exploration of a large joint action space in a load balancing problem. Jaques et al. (2018) define an intrinsic reward function for multi-agent reinforcement learning that encourages agents to take actions which have the biggest effect on other agents’ behavior, otherwise referred to as ”social influence.” Agogino & Tumer (2008) Define metrics for evaluating the efficacy of reward functions in multi-agent domains. These works, while important, do not address the problem of exploring a large state space, and whether this exploration can be improved in multi-agent systems. A recent approach to collaborative evolutionary reinforcement learning (Khadka et al., 2019) shares some similarities with our approach. As in our work, the authors devise a method for learning a population of diverse policies with a shared replay buffer and dynamically selecting the best learner; however, their work is focused on single-agent tasks and does not incorporate any notion of intrinsic rewards. As such, this work is not applicable to sparse reward problems in MARL. 3 BACKGROUND Dec-POMDPs In this work, we consider the setting of decentralized POMDPs (Oliehoek et al., 2016), which are used to describe cooperative multi-agent tasks. A decentralized POMDP (DecPOMDP) is defined by a tuple: (S,A,T ,O,O ,R, n, γ). In this setting we have n total agents. 
S is the set of global states in the environment, while O = ⊗_{i∈{1...n}} O_i is the set of joint observations for each agent and A = ⊗_{i∈{1...n}} A_i is the set of possible joint actions for each agent. A specific joint action at one time step is denoted as a = {a_1, . . . , a_n} ∈ A and a joint observation is o = {o_1, . . . , o_n} ∈ O. T is the state transition function which defines the probability P(s′|s, a), and O is the observation function which defines the probability P(o|a, s′). R is the reward function which maps the combination of state and joint actions to a single scalar reward. Importantly, this reward is shared between all agents, so Dec-POMDPs always describe cooperative problems. Finally, γ is the discount factor which determines how much the agents should favor immediate reward over long-term gain. Soft Actor-Critic Our approach uses Soft Actor-Critic (SAC) (Haarnoja et al., 2018) as its underlying algorithm. SAC incorporates an entropy term in the loss functions for both the actor and critic, in order to encourage exploration and prevent premature convergence to a sub-optimal deterministic policy. The policy gradient with an entropy term is computed as follows:

$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{s \sim D,\, a \sim \pi}\left[\nabla_\theta \log \pi_\theta(a|s)\left(-\frac{\log \pi_\theta(a|s)}{\alpha} + Q_\psi(s, a) - b(s)\right)\right] \quad (1)$$

where D is a replay buffer that stores past environment transitions, ψ are the parameters of the learned critic, b(s) is a state dependent baseline (e.g. the state value function V(s)), and α is a reward scale parameter determining the amount of entropy in an optimal policy. The critic is learned with the following loss function:

$$L_Q(\psi) = \mathbb{E}_{(s,a,r,s') \sim D}\left[\left(Q_\psi(s, a) - y\right)^2\right] \quad (2)$$

$$y = r(s, a) + \gamma\, \mathbb{E}_{a' \sim \pi(s')}\left[Q_{\bar{\psi}}(s', a') - \frac{\log \pi_{\bar{\theta}}(a'|s')}{\alpha}\right] \quad (3)$$

where ψ̄ are the parameters of the target critic, which is an exponential moving average of the past critics, updated as ψ̄ ← (1 − τ)ψ̄ + τψ, and τ is a hyperparameter that controls the update rate. Centralized Training with Decentralized Execution A number of works in deep multi-agent reinforcement learning have followed the paradigm of centralized training with decentralized execution (Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Iqbal & Sha, 2019). This paradigm allows for agents to train while sharing information (or incorporating information that is unavailable at test time) but act using only local information, without requiring communication which may be costly at execution time. Since most reinforcement learning applications use simulation for training, communication between agents during the training phase has a relatively lower cost. 4 INTRINSIC REWARD FUNCTIONS FOR MULTI-AGENT EXPLORATION In this section we present a set of intrinsic reward functions for exploration that incorporate information about what other agents have explored. These rewards assume that each agent (indexed by i) has a novelty function f_i that determines how novel an observation is to it, based on its past experience. This function can be an inverse state visit count in discrete domains, or, in large/continuous domains, it can be represented by recent approaches for developing novelty-based intrinsic rewards in complex domains, such as random network distillation (Burda et al., 2019). Note that we assume that all agents share the same observation space so that each agent's novelty function can operate on all other agents' observations. In Table 1 we define the intrinsic rewards that we use in our experiments.
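To make the cross-agent evaluation concrete, here is a minimal sketch (function names are illustrative, not from the paper) of how every agent's novelty function can be queried on a single agent's new observation; the resulting vector is what the rewards in Table 1 operate on, assuming the shared observation space stated above.

```python
from typing import Callable, Sequence


def novelty_vector(novelty_fns: Sequence[Callable], obs_i):
    """Evaluate every agent's novelty function f_j on agent i's new observation o_i'.

    Returns [f_1(o_i'), ..., f_n(o_i')], which the multi-agent intrinsic rewards
    described below combine in different ways.
    """
    return [f_j(obs_i) for f_j in novelty_fns]
```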
INDEPENDENT rewards are analogous to single-agent approaches to exploration which define the intrinsic reward for an agent as the novelty of their own new observation that occurs as a result of an action. The remainder of intrinsic reward functions that we consider use the novelty functions of other agents, in addition to their own, to further inform their exploration. MINIMUM rewards consider how novel all agents find a specific agent’s observation and reward that agent based on the minimum of these novelties. This method leads to agents only being rewarded for exploring areas that no other agent has explored, which could be advantageous in scenarios where redundancy in exploration is not useful or even harmful. COVERING rewards agents for exploring areas that they consider more novel than the average agent. This reward results in agents shifting around the state space, only exploring regions as long as they are more novel to them than their average teammate. BURROWING rewards do the opposite, only rewarding agents for exploring areas that they consider less novel than the average agent. While seemingly counterintuitive, these rewards encourage agents to further explore areas they have already explored with the hope that they will discover new regions that few or no other agents have seen, which they will then consider less novel than average and continue to explore. As such, these rewards result in agents continuing to explore until they exhaust all possible intrinsic rewards from a given region (i.e. hit a dead end), somewhat akin to a depth-first search. LEADER-FOLLOWER uses burrowing rewards for the first agent, and covering rewards for the rest of the agents. This leads to an agent exploring a space thoroughly, and the rest of the agents following along and trying to cover that space. Note that these are not meant to be a comprehensive set of intrinsic reward functions applicable to all cooperative multi-agent tasks but rather a set of examples of how exploration can be centralized in order to take other agents into account. Our approach, described in the following sections, is agnostic to the type of intrinsic rewards used and, as such, can incorporate other reward types not described here, as long as they can be computed off-policy. 5 LEARNING POLICIES FOR MULTI-AGENT EXPLORATION For many tasks, it is impossible to know a priori which intrinsic reward will be the most helpful one. Furthermore, the type of reward that is most helpful could change over the course of training if the task is sufficiently complex. In this section we present our approach for simultaneously learning policies trained with different types of intrinsic rewards and dynamically selecting the best one. Simultaneous Policy Learning In order to learn policies for various types of intrinsic rewards in parallel, we utilize a shared replay buffer and off-policy learning to maximize sample efficiency. In other words, we learn policies and value functions for all intrinsic reward types from all collected data, regardless of which policies it was collected by. This parallel learning is made possible by the fact that we can compute our novelty functions off-policy, given the observations for each agent after each environment transition, which are saved in a replay buffer. For each type of reward, we learn a different ”head” for our policies and critics. In other words, we learn a single network for each agent’s set of policies that shares early layers and branches out into different heads for each reward type.
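Since Table 1 itself is not reproduced in this text, the sketch below encodes one plausible reading of the prose descriptions above; in particular, the mean-gated forms of COVERING and BURROWING and the handling of LEADER-FOLLOWER's first agent are assumptions rather than the paper's exact formulas.

```python
import numpy as np


def multi_agent_intrinsic_reward(novelties, reward_type, agent_idx):
    """Reward for agent i given novelties[j] = f_j(o_i') for every agent j.

    A plausible reading of the prose descriptions of the reward types; the exact
    expressions in Table 1 may differ.
    """
    novelties = np.asarray(novelties, dtype=float)
    own = novelties[agent_idx]
    if reward_type == "independent":
        return float(own)
    if reward_type == "minimum":
        return float(novelties.min())
    mean = float(novelties.mean())
    if reward_type == "covering":
        # rewarded only where the agent finds the observation more novel than average
        return float(own) if own > mean else 0.0
    if reward_type == "burrowing":
        # rewarded only where the agent finds the observation less novel than average
        return float(own) if own < mean else 0.0
    if reward_type == "leader_follower":
        # first agent burrows, the rest cover, per the prose description
        sub = "burrowing" if agent_idx == 0 else "covering"
        return multi_agent_intrinsic_reward(novelties, sub, agent_idx)
    raise ValueError(f"unknown reward type: {reward_type}")
```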
For critics, we learn a single network across all agents that shares early layers and branches out into separate heads for each agent and reward type. We learn separate heads for intrinsic and extrinsic rewards, as in Burda et al. (2019). We provide a diagram of our model architecture in Figure 1. We index agents by i ∈ {1 . . . n} and intrinsic reward types by j ∈ {1 . . . m}, where m is the total number of intrinsic reward types that we are considering. The policy for agent i, trained using reward j (in addition to extrinsic rewards), is represented by π^j_i. It takes as input agent i’s observation, o_i, and outputs a distribution from which we can sample the action a_i. The parameters of this policy are Θ^j_i = {θ^share_i, θ^j_i}, where θ^share_i is a shared base/input (for agent i) in a neural network and θ^j_i is a head/output specific to this reward type. The extrinsic critic for policy head π^j_i is represented by Q^ex_{i,j}. It takes as input the global state s and the actions of all other agents a_{\setminus i}, and it outputs the expected returns under policy π^j_i for each possible action that agent i can take, given all other agents’ actions. The parameters of this critic are Ψ^ex_{i,j} = {ψ^share, ψ^ex_{i,j}}, where ψ^share is a shared base across all agents and reward types. A critic with similar structure exists for predicting the intrinsic returns of actions taken by π^j_i, represented by Q^in_{i,j}, which uses the parameters Ψ^in_{i,j} = {ψ^share, ψ^in_{i,j}}. Note that the intrinsic critics share the same base parameters ψ^share. We remove the symbols representing the parameters of the policies (Θ) and the critics (Ψ) for readability. In our notation we use the absence of a subscript or superscript to refer to a group. For example, π^j refers to all agents’ policies trained on intrinsic reward j. We train our critics with the following loss function, adapted from soft actor-critic:

$$L_Q(\Psi) = \mathbb{E}_{(s,o,a,r,s',o') \sim D}\left[\sum_{j=1}^{m}\sum_{i=1}^{n}\left(Q^{ex}_{i,j}(s,a) - y^{ex}_{i,j}\right)^2 + \left(Q^{in}_{i,j}(s,a) - y^{in}_{i,j}\right)^2\right] \quad (4)$$

$$y^{ex}_{i,j} = r^{ex}(s,a) + \gamma\, \mathbb{E}_{a' \sim \bar{\pi}^j(o')}\left[\bar{Q}^{ex}_{i,j}(s',a') - \frac{\log \bar{\pi}^j_i(a'_i|o'_i)}{\alpha}\right] \quad (5)$$

$$y^{in}_{i,j} = r^{in}_{i,j}(o'_i) + \gamma\, \mathbb{E}_{a' \sim \bar{\pi}^j(o')}\left[\bar{Q}^{in}_{i,j}(s',a') - \frac{\log \bar{\pi}^j_i(a'_i|o'_i)}{\alpha}\right] \quad (6)$$

where Q̄ refers to the target Q-function, an exponentially weighted average of the past Q-functions, used for stability, and π̄ are similarly updated target policies. The intrinsic rewards laid out in Table 1 are represented as a function of the observation that results from the action taken, r^in_{i,j}(o'_i), where j specifies the type of reward. Importantly, we can calculate these loss functions for expected intrinsic and extrinsic returns for all policies given a single environment transition, allowing us to learn multiple policies for each agent in parallel. We train each policy head with the following gradient:

$$\nabla_{\Theta^j_i} J(\pi^j_i) = \mathbb{E}_{(s,o) \sim D,\, a \sim \pi^j}\left[\nabla_{\Theta^j_i} \log \pi^j_i(a_i|o_i)\left(-\frac{\log \pi^j_i(a_i|o_i)}{\alpha} + A^j_i(s,a)\right)\right] \quad (7)$$

$$A^j_i(s,a) = Q^{ex}_{i,j}(s,a) + \beta Q^{in}_{i,j}(s,a) - V^j_i(s) \quad (8)$$

$$V^j_i(s) = \sum_{a'_i \in A_i} \pi^j_i(a'_i|o_i)\left(Q^{ex}_{i,j}(s, \{a'_i, a_{\setminus i}\}) + \beta Q^{in}_{i,j}(s, \{a'_i, a_{\setminus i}\})\right) \quad (9)$$

where β is a scalar that determines the weight of the intrinsic rewards, relative to extrinsic rewards, and A^j_i is a multi-agent advantage function (Foerster et al., 2018; Iqbal & Sha, 2019), used for helping with multi-agent credit assignment. Dynamic Policy Selection Now that we have established a method for simultaneously learning policies using different intrinsic reward types, we must devise a means of selecting between these policies.
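Before turning to policy selection, here is a small sketch of the multi-agent advantage from Eqs. (8)-(9) above; the tensor layout (per-action values for agent i with the other agents' actions held fixed) is chosen for illustration.

```python
import torch


def multi_agent_advantage(q_ex, q_in, pi_probs, action, beta):
    """Eqs. (8)-(9): A_i^j(s, a) = Q_ex(a_i) + beta * Q_in(a_i) - V_i^j(s).

    q_ex, q_in: tensors of shape [n_actions] giving agent i's extrinsic/intrinsic
    action values with the other agents' actions held fixed; pi_probs: agent i's
    policy probabilities over its own actions; action: the index of the taken action.
    """
    q = q_ex + beta * q_in
    v = (pi_probs * q).sum()          # counterfactual baseline, Eq. (9)
    return q[action] - v
```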
In order to select policies to use for environment rollouts, we must consider which policies maximize extrinsic returns, while taking into account the fact that there may still be ”unknown unknowns,” or regions that the agents have not seen yet where they may be able to further increase their extrinsic returns. As such, we must learn a meta-policy that, at the beginning of each episode, selects between the different sets of policies trained on different intrinsic rewards and maximizes extrinsic returns without collapsing to a single set of policies too early. We parameterize the selector policy Π with a vector, φ, that contains an entry for every reward type. The probability of sampling head j is: Π(j) ∝ exp(φ[j]). Unlike the action policies, this high-level policy does not take any inputs, as we simply want to learn which set of policies trained on the individual intrinsic reward functions has the highest expected extrinsic returns from the beginning of the episode. The most sensible metric for selecting policies is the expected extrinsic returns given by each policy head. We can use policy gradients to train the policy selector, Π, to maximize this value using the returns received when performing rollouts in the environment. We use the following gradient to train Π:

$$\nabla_\phi J(\Pi) = \mathbb{E}_{h \sim \Pi}\left[\nabla_\phi \log \Pi(h)\left(-\frac{\log \Pi(h)}{\eta} + R^{ex}_h - b_\Pi\right)\right] \quad (10)$$

$$R^{ex}_h = \sum_{t=0}^{T}\gamma^t r^{ex}(s_t, a_t)\,\Big|\, a \sim \pi^h(o_t), \qquad b_\Pi = \sum_{h'=1}^{m}\Pi(h')\mu_{h'} \quad (11)$$

where µ_h is a running mean of the returns received by head h in the past, and η is a parameter similar to α for the low-level policies, which promotes entropy in the selector policy. Entropy in the policy selector is important in order to prevent it from collapsing onto a single exploration type that does well at first but does not continue to explore as effectively as others. As such, we can learn a diverse set of behaviors based on various multi-agent intrinsic reward functions and select the one that maximizes performance on the task at hand at any point during training, while continuing to consider other policies that may lead to greater rewards. 6 EXPERIMENTS We begin by describing our evaluation domains and then present experimental results which demonstrate the effectiveness of our approach. We provide additional details in the appendix and will share code for both the model and environments. We use a maximum of four agents in gridworld and two agents in VizDoom. We encode several tasks in both domains related to collecting the items (displayed in yellow in Figure 2), each of which requires a different type of exploration: TASK 1 Agents must cooperatively collect all treasure on the map in order to complete the task; TASK 2 Agents must all collect the same treasure. The first agent to collect a treasure during an episode determines the goal for the rest of the agents. TASK 3 Agents must all collect the specific treasure that is assigned to them. The two agent version of each task uses agents 1-2 and treasures A-B, while the three agent versions use 1-3, A-C, and the four agent versions use 1-4, A-D. Agents receive a negative time penalty towards extrinsic rewards at each step, so they are motivated to complete the task as quickly as possible. The only positive extrinsic reward comes from any agent collecting a treasure that is allowed by the specific task, and rewards are shared between all agents. The optimal strategy in TASK 1 is for agents to spread out and explore separate portions of the map, while in TASK 2 they should explore the same areas, and in TASK 3 they should explore independently.
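Returning to the selector defined in Eqs. (10)-(11) above, the following NumPy sketch shows the softmax parameterization Π(h) ∝ exp(φ[h]) and a REINFORCE-style estimate of its gradient for a single sampled head; treating the running means µ as a precomputed array is an assumption about bookkeeping the text leaves implicit.

```python
import numpy as np


def selector_probs(phi):
    """Pi(h) proportional to exp(phi[h])."""
    z = np.exp(phi - phi.max())
    return z / z.sum()


def selector_gradient(phi, head, episode_return, running_means, eta):
    """Single-sample estimate of Eq. (10) for the sampled head.

    episode_return is the discounted extrinsic return R_h^ex from Eq. (11);
    running_means[h'] is the running mean mu_{h'} used for the baseline b_Pi.
    """
    probs = selector_probs(phi)
    baseline = float(probs @ np.asarray(running_means))
    advantage = -np.log(probs[head]) / eta + episode_return - baseline
    grad_log = -probs                  # d log Pi(h) / d phi_k = 1[k = h] - Pi(k)
    grad_log[head] += 1.0
    return grad_log * advantage
```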
6.1 GRIDWORLD DOMAIN We first test our approach using a multi-agent gridworld domain (pictured in Fig. 2a), which allows us to design environments where the primary challenge lies in a combination of exploring the state space efficiently and coordinating behaviors. The environment includes two sources of stochasticity: random transitions and black holes. At each step there is a 10% chance of an agent’s action being replaced by a random one. Furthermore, there are several ”black holes” placed around the map which have a probability of opening at each time step. This probability changes at each step using a biased random walk such that it moves toward one, until the hole opens and it resets to zero. If an agent steps into a black hole when it is open, they will be sent back to their starting position. The spaces colored as black are holes that are currently open, while the gray spaces are holes that have the possibility of opening at the next step (the darker they are, the higher the probability). We set the rate of black holes dropping out to be higher in TASK 1 than in the other 2 tasks, in order to balance the difficulty. The novelty function for each agent f_i, which is used for calculating the intrinsic rewards in Table 1, is defined as 1/N^ζ, where N is the number of times that the agent has visited its current cell and ζ is a decay rate selected as a hyperparameter (we find that ζ = 0.7 works well for our purposes). 6.2 VIZDOOM DOMAIN In order to test our method’s ability to scale to more complex environments with similarly challenging exploration tasks, we implement tasks analogous to those in our gridworld environment (i.e. extrinsic rewards are defined identically) in the VizDoom framework (Kempka et al., 2016). We use the ”My Way Home” map, which has been used as a test bed for single agent exploration techniques (Pathak et al., 2017), and modify it for multi-agent tasks (pictured in Figure 2b). Since the agents are moved to a central location closer to their rewards than in the original map, we lower the action repeat from 4 to 2, in order to force agents to take twice as many steps to explore the same areas, maintaining the challenging nature of exploration in the original task. As in the gridworld setting, we use count-based intrinsic rewards for VizDoom; however, since VizDoom is not a discrete domain, we separate agents’ (x, y) positions into discrete bins and use the counts for these bins. We again find that ζ = 0.7 works well in our experiments. 6.3 MAIN RESULTS Figure 3a demonstrates the results of our approach over the course of training on the 2 agent version of TASK 1 in gridworld, and the final results on each task/agent/domain combination can be found in Table 2. The full training curves for all settings can be found in the appendix (Section A.4). We train a team of agents using each of the multi-agent intrinsic reward functions defined in Table 1 individually, and then test our dynamic policy selection approach. We find that our approach is competitive with, or outperforms, the best performing individual exploration method in nearly all tasks. This performance is exciting since our method receives no prior information about which type of exploration would work best, while each type carries its own inductive bias. Notably, our learned policy selector learns to select the policies trained on intrinsic rewards that do well individually on the tasks.
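The count-based novelty used in both domains (Sections 6.1-6.2 above) can be sketched as follows; the clamping of unvisited cells and the VizDoom map extents are assumptions, since the text only specifies the decay ζ = 0.7 and the 30 x 26 binning.

```python
from collections import defaultdict


class CountNovelty:
    """Per-agent count-based novelty f_i(cell) = 1 / N^zeta (zeta = 0.7 in the paper)."""

    def __init__(self, zeta: float = 0.7):
        self.zeta = zeta
        self.counts = defaultdict(int)

    def visit(self, cell):
        self.counts[cell] += 1

    def novelty(self, cell) -> float:
        # Clamping unvisited cells to a count of 1 is an assumption; the text does
        # not state how f_i behaves before the first visit.
        return 1.0 / max(self.counts[cell], 1) ** self.zeta


def vizdoom_bin(x, y, x_range, y_range, n_x=30, n_y=26):
    """Discretize a continuous (x, y) position into the 30 x 26 bins from Section 6.2.

    The map extents x_range/y_range are placeholders; the text does not specify them.
    """
    bx = min(max(int((x - x_range[0]) / (x_range[1] - x_range[0]) * n_x), 0), n_x - 1)
    by = min(max(int((y - y_range[0]) / (y_range[1] - y_range[0]) * n_y), 0), n_y - 1)
    return bx, by
```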
For instance, on TASK 1 with 2 agents, we find that our policy selector consistently selects BURROWING and MINIMUM rewards, the two best performing reward functions on that task. Furthermore, we find that our results on the more complex VizDoom domain mirror those in the gridworld, indicating that our methods are not limited to discrete domains, assuming that a reliable way for measuring the novelty of observations exists. Interestingly, our approach is sometimes able to significantly surpass the performance of the best individual reward function on TASK 3. This task requires agents to collect the specific reward assigned to them, so we expect independent exploration to be the most effective; however, exploration types that perform ”divide-and-conquer” type behavior such as BURROWING and MINIMUM have the potential to drastically speed up the exploration process if they happen to divide the space correctly, leading to a stark success-failure contrast in runs of these types. Since our method MULTI can select policies trained on these rewards, and otherwise fall back on INDEPENDENT policies if they are not working, we find that our method is able to surpass all individual reward types. We find that our approach is unable to match the performance of the best individual method on TASK 2 in some settings (gridworld with 3 agents and VizDoom). This lack of success may be an indication that these particular settings require commitment to a specific exploration strategy early on in training, highlighting a limitation of our approach. Our method requires testing out all policies until we find one that reaches high extrinsic rewards, which can dilute the effectiveness of exploration early on.

Table 2: Final results (mean ± standard deviation across runs) for each domain, task, and number of agents. Columns list the reward functions of Table 1 followed by our method (MULTI).

Domain    | Task | Agents | INDEPENDENT | MINIMUM     | COVERING    | BURROWING   | LEADER-FOLLOWER | MULTI
GRIDWORLD | 1    | 2      | 0.14 ± 0.05 | 1.62 ± 0.59 | 0.13 ± 0.12 | 1.98 ± 0.06 | 0.18 ± 0.24     | 2.00 ± 0.00
GRIDWORLD | 1    | 3      | 1.16 ± 0.11 | 1.49 ± 0.76 | 0.00 ± 0.00 | 2.06 ± 1.05 | 0.34 ± 0.45     | 2.23 ± 0.73
GRIDWORLD | 1    | 4      | 0.84 ± 0.29 | 1.78 ± 0.44 | 0.00 ± 0.00 | 1.90 ± 0.49 | 1.17 ± 0.39     | 2.04 ± 0.61
GRIDWORLD | 2    | 2      | 2.00 ± 0.00 | 0.92 ± 0.10 | 1.11 ± 0.99 | 0.98 ± 0.05 | 1.73 ± 0.66     | 1.83 ± 0.41
GRIDWORLD | 2    | 3      | 2.66 ± 0.80 | 1.11 ± 0.29 | 0.54 ± 0.80 | 1.80 ± 0.29 | 3.00 ± 0.00     | 1.80 ± 0.71
GRIDWORLD | 2    | 4      | 1.83 ± 1.08 | 0.93 ± 0.13 | 0.22 ± 0.18 | 1.99 ± 0.67 | 2.66 ± 2.06     | 2.54 ± 1.21
GRIDWORLD | 3    | 2      | 1.39 ± 0.94 | 0.67 ± 1.03 | 0.29 ± 0.37 | 0.67 ± 1.03 | 0.83 ± 0.67     | 2.00 ± 0.00
GRIDWORLD | 3    | 3      | 1.68 ± 0.70 | 0.60 ± 0.73 | 0.09 ± 0.08 | 1.35 ± 1.16 | 1.59 ± 0.83     | 2.21 ± 0.91
GRIDWORLD | 3    | 4      | 1.12 ± 0.47 | 1.36 ± 0.71 | 0.05 ± 0.05 | 2.14 ± 1.49 | 0.68 ± 0.53     | 1.73 ± 0.47
VIZDOOM   | 1    | 2      | 0.94 ± 0.54 | 1.57 ± 0.74 | 0.16 ± 0.17 | 1.94 ± 0.10 | 0.61 ± 0.43     | 1.98 ± 0.03
VIZDOOM   | 2    | 2      | 1.52 ± 0.75 | 1.53 ± 0.74 | 0.70 ± 1.00 | 0.63 ± 0.04 | 1.93 ± 0.10     | 1.23 ± 0.65
VIZDOOM   | 3    | 2      | 0.18 ± 0.19 | 0.64 ± 1.05 | 0.45 ± 0.46 | 0.29 ± 0.25 | 0.20 ± 0.17     | 1.64 ± 0.63

6.4 ANALYSIS Characteristics of Different Intrinsic Rewards In order to better understand how each reward type encourages agents to explore the state space, we visualize their exploration in videos, viewable at the anonymized link: https://sites.google.com/view/multi-exploration-iclr2020/home. INDEPENDENT rewards, as expected, result in agents exploring the whole state space without taking other agents into consideration. As a result, on TASK 1, which requires coordination between agents to spread out and explore different areas, INDEPENDENT rewards struggle; however, on TASK 3, where agents receive individualized goals, independent exploration usually performs better, relative to the other methods. TASK 2 also requires coordination, but the rate of black holes dropping out in the gridworld version is lower on that task, making exploration easier.
As a result, INDEPENDENT rewards perform well on TASK 2; however, we find that LEADER-FOLLOWER also performs well on this task, especially when more agents are involved, indicating that these rewards do a good job of biasing agents toward exploring similar regions of the environment. MINIMUM rewards prevent agents from exploring the same regions redundantly but can lead to situations where one of the agents is the first to explore all regions that provide sparse extrinsic rewards. In these cases, other agents are not aware of the extrinsic rewards and are also not motivated to explore for them since another agent has already done so. COVERING rewards, as expected, lead to behavior where agents are constantly switching up the regions that they explore. While this behavior does not prove to be useful in the tasks we test since the switching slows down overall exploration progress, it may be useful in scenarios where agents are required to spread out. Finally, BURROWING rewards cause agents to each explore different subregions and continue to explore those regions until they exhaust their options. This behavior is particularly effective on TASK 1, where agents are best served by spreading out and exploring the whole map in a mutually exclusive fashion. Ablations We compare to a baseline meta-policy which simply selects the action policies uniformly at random. We find that our approach is significantly superior to this baseline (see Figure 3b, Multi (Uniform Meta-Policy)). Furthermore, we test a version of our method where all policies (with different random initializations) are trained on independent rewards (Multi (All Independent)). The purpose of this ablation is to test the degree to which the specific multi-agent intrinsic reward functions are helpful, as opposed to simply providing multiple options at each episode. Again, we find that our method outperforms the baseline, indicating that both aspects of our approach (diverse intrinsic reward functions which share information across agents, and a meta-policy selector that maximizes extrinsic rewards) are crucial for success in multi-agent exploration tasks. We perform two further ablations/comparisons. Results on task 1 with 2 agents in GridWorld are viewable in Figure 3b, and results on tasks 2 and 3 with 2 agents are viewable in the Appendix (A.5). In the first (Centralized) we compute intrinsic rewards under the assumption that all agents are treated as one agent. In other words, we use the inverse count of the number of times that all agents have jointly taken up their combined positions, rather than considering agents independently. While this reward function will ensure that the global state space is thoroughly searched, it lacks the inductive biases toward spatial coordination that our reward functions incorporate. As such, it does not learn as efficiently as our method. In the second (Multi (No Entropy)) we remove the entropy term from the head selector loss function in order to test its importance. We find that this ablation is unable to match the performance of the full method, indicating that entropy is crucial in making sure that our method does not converge early to a suboptimal policy selector. 7 CONCLUSION We propose a set of multi-agent intrinsic reward functions with differing properties, and compare them both qualitatively (through videos) and quantitatively on several multi-agent exploration tasks in both a gridworld domain as well as in VizDoom.
Overall, we can see that cooperative multi-agent tasks can, in many cases, benefit from intrinsic rewards that take into account what other agents have explored, but there are various ways to incorporate that information, each with differing properties. As such, we propose a method for learning policies for all intrinsic reward types simultaneously while dynamically selecting the most effective ones. We show that our method is capable of matching or surpassing the performance of the best performing intrinsic reward type on various tasks while using the same number of samples collected from the environment. In future work we hope to introduce methods for directly learning the multi-agent intrinsic reward functions, rather than selecting from a set. A APPENDIX A.1 ENVIRONMENT DETAILS A.1.1 GRIDWORLD The black holes which send agents back to their starting positions if they are stepped into are an important aspect of the environment, as they add difficulty to exploration. The probability, ρ, of a black hole opening at each step, t, evolves as such: ρt+1 = ρt +N (µ, σ), where µ = σ = 0.05 for TASK 1 and µ = σ = 0.005 for 2 and 3. Agents observe their global position in (x, y) coordinates (scalars), as well as local information regarding walls in adjacent spaces, the probability of their adjacent spaces opening into a black hole, the relative position of other agents (if they are within 3 spaces), as well as information about which treasures the agent has already collected in the given episode. The global state is represented by the (x, y) coordinates of all agents, as one-hot encoded vectors for x and y separately, as well as the local information of all agents regarding black holes, walls, and treasures collected. Each agent’s action space consists of the 4 cardinal directions as well as an option to not move, which is helpful in cases where an agent is waiting for a black hole to be safe to cross. A.1.2 VIZDOOM Agents receive their egocentric view (Figure 2c) in the form of 48x48 grayscale images as observations along with an indicator of which agents (if any) have collected each reward, and we use a vector based global state which includes all agents’ (x, y) positions and velocities, their orientations, as well as the same indicator of which agent has collected each reward. As in the gridworld setting, we use count-based intrinsic rewards for VizDoom; however, since VizDoom is not a discrete domain, we separate agents’ (x, y) positions into discrete bins and use the counts for these bins. There are 30 bins in the x dimension and 26 in the y dimension. (x, y) positions in the global state are represented both as scalars and one-hot vectors indicating which bin the agents are currently occupying. Each agent can choose from 3 actions at each time step: turn left, turn right, or go forward. A.2 TRAINING DETAILS The training procedure is detailed in Algorithm 1, and all hyperparameters are listed in Tables 3 and 4. Hyperparameters were selected by tuning one parameter at a time through intuition on task 1 with 2 agents and then applying to the rest of the settings with minimal changes. Where hyperparameters differ between settings, we make a footnote denoting them as such. Algorithm 1 Training Procedure for Multi-Explore w/ Soft Actor-Critic (Haarnoja et al., 2018) 1: Initialize environment with n agents 2: Initialize replay buffer, D 3: tupdate ← 0 4: tep ← max ep length 5: for t = 1 . . . total steps do 6: if episode done or tep == max ep length then 7: for j = 1 . . . 
num updates do 8: UPDATESELECTOR(R, h) . Eqs 10-11 in main text 9: end for 10: s,o← RESETENV 11: h ∼ Π . Sample policy head 12: tep ← 0 13: R← 0 14: end if 15: Select actions ai ∼ πhi (·|oi) for each agent, i 16: Send actions to environment and get s, o, r 17: R← R+ γtep r 18: Store transitions for all environments in D 19: tupdate+ = 1 20: tep+ = 1 21: if tupdate == steps per update then 22: for j = 1 . . . num updates do 23: Sample minibatch, B 24: UPDATECRITIC(B) . Eqs 4-6 in main text 25: UPDATEPOLICIES(B) . Eqs 7-9 in main text 26: Update target parameters: Ψ̄ = τΨ̄ + (1− τ)Ψ Θ̄ = τΘ̄ + (1− τ)Θ 27: end for 28: tupdate ← 0 29: end if 30: end for A.3 NETWORK ARCHITECTURES In this section we list, in pseudo-code, the architectures we used for all policies and critics A.3.1 GRIDWORLD θsharei (shared for policy heads): obs_size = observations.shape[1] fc1 = Linear(in_dim=obs_size, out_dim=128) nl1 = ReLU() θji (specific to each policy head): n_acs = actions.shape[1] fc2 = Linear(in_dim=fc1.out_dim, out_dim=32) nl2 = ReLU() fc3 = Linear(in_dim=fc2.out_sim, out_dim=n_acs) ψshare (shared across critics for all agents and reward types): state_size = states.shape[1] fc1 = Linear(in_dim=state_size, out_dim=128) nl1 = ReLU() ψi,j (specific to each agent/policy head combination, same architecture for extrinsic and intrinsic critics): n_acs = actions.shape[1] # fc2 takes other agents’ actions as input fc2 = Linear(in_dim=fc1.out_dim + (num_agents - 1) * n_acs, out_dim=128) nl2 = ReLU() fc3 = Linear(in_dim=fc2.out_dim, out_dim=n_acs) A.3.2 VIZDOOM θsharei (shared for policy heads belonging to one agent): # vector observation encoder vect_obs_size = vector_observations.shape[1] vect_fc = Linear(in_dim=obs_size, out_dim=32) vect_nl = ReLU() # image observation encoder img_obs_channels = image_observations.shape[1] pad1 = ReflectionPadding(size=1) conv1 = Conv2D(in_channels=img_obs_channels, out_channels=32, filter_size=3, stride=2) conv_nl1 = ReLU() pad2 = ReflectionPadding(size=1) conv2 = Conv2D(in_channels=conv1.out_channels, out_channels=32, filter_size=3, stride=2) conv_nl2 = ReLU() pad3 = ReflectionPadding(size=1) conv3 = Conv2D(in_channels=conv2.out_channels, out_channels=32, filter_size=3, stride=2) conv_nl3 = ReLU() pad4 = ReflectionPadding(size=1) conv4 = Conv2D(in_channels=conv3.out_channels, out_channels=32, filter_size=3, stride=2) conv_nl4 = ReLU() conv_flatten = Flatten() # flatten output of conv layers conv_fc = Linear(in_dim=conv_flatten.out_dim, out_dim=128) conv_fc_nl = ReLU() θji (specific to each policy head): n_acs = actions.shape[1] # takes concatenation of image and vector encodings as input fc_out1 = Linear(in_dim=conv_fc.out_dim + vect_fc.out_dim, out_dim=32) fc_out_nl = ReLU() fc_out2 = Linear(in_dim=fc_out1.out_dim, out_dim=n_acs) ψshare (shared across critics for all agents and reward types): state_size = states.shape[1] fc1 = Linear(in_dim=state_size, out_dim=256) nl1 = ReLU() ψi,j (specific to each agent/policy head combination, same architecture for extrinsic and intrinsic critics): n_acs = actions.shape[1] # fc2 takes other agents’ actions as input fc2 = Linear(in_dim=fc1.out_dim + (num_agents - 1) * n_acs, out_dim=256) nl2 = ReLU() fc3 = Linear(in_dim=fc2.out_dim, out_dim=n_acs) A.4.1 GRIDWORLD A.4 TRAINING CURVES A.4.2 VIZDOOM A.5 MORE ABLATIONS In this section we consider two ablations/comparisons to our model across all three tasks in the 2 agent version of gridworld. 
In the first (Centralized) we compute intrinsic rewards by treating all agents as a single agent. In other words, we use the inverse count of the number of times that all agents have jointly taken up their combined positions, rather than considering agents independently (a sketch contrasting the two counting schemes is given at the end of this section). While this reward function will ensure that the global state space is thoroughly searched, it lacks the inductive biases toward spatial coordination that our reward functions incorporate. As such, it does not learn as efficiently as our method in any of the three tasks. In the second (Multi (No Entropy)) we remove the entropy term from the head selector loss function in order to test its importance. We find that this ablation is unable to match the performance of the full method, indicating that entropy is crucial in making sure that our method does not converge early to a suboptimal policy selector. A.6 ANALYZING META-POLICY In Figure 19 we analyze the behavior of the meta-policy in two separate runs. We evaluate on Task 3, since we find that our method is able to surpass the best individual reward function. This task assigns specific goals to each agent. As such, one might expect that independent exploration would work most effectively in this setting. While independent exploration is effective (see Figure 10), we find that our method can outperform it. In both runs, we find that burrowing rewards are selected when the agents finally learn how to solve the task; however, we find that burrowing rewards are not necessarily successful when deployed on their own. This lack of success is likely because these rewards cause the agents to pick a region and commit to exploring it for the duration of training. As such, the agents may pick the "wrong" region at first and never be able to recover. On the other hand, using our method, the meta-policy can wait until the burrowing exploration regions align with the assigned rewards and then select the policies trained on these rewards. This usually ends up being more efficient than waiting for the agents to explore the whole map using independent rewards.
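As forward-referenced above, the following sketch contrasts the Centralized ablation with independent per-agent counting for count-based intrinsic rewards over discretized positions. The class and function names are illustrative placeholders, not identifiers from our code; we only assume the inverse-count form of the reward described in the ablation.

```python
from collections import defaultdict

class CountBasedNovelty:
    """Inverse-count novelty over discretized (x, y) positions (illustrative sketch)."""
    def __init__(self):
        self.counts = defaultdict(int)

    def update_and_reward(self, key):
        self.counts[key] += 1
        return 1.0 / self.counts[key]  # inverse visit count

# Independent exploration: each agent keeps its own counts over its own position.
independent = [CountBasedNovelty() for _ in range(2)]

# Centralized ablation: a single count over the joint positions of all agents.
centralized = CountBasedNovelty()

def intrinsic_rewards(agent_positions, mode="independent"):
    """agent_positions: list of (x, y) tuples, one per agent."""
    if mode == "independent":
        return [independent[i].update_and_reward(pos)
                for i, pos in enumerate(agent_positions)]
    # Centralized: all agents share one reward based on the joint configuration.
    r = centralized.update_and_reward(tuple(agent_positions))
    return [r for _ in agent_positions]

# Example step with two agents.
print(intrinsic_rewards([(0, 0), (3, 1)], mode="independent"))
print(intrinsic_rewards([(0, 0), (3, 1)], mode="centralized"))
```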
1. What is the main contribution of the paper, and how does it differ from prior works in multi-agent reinforcement learning? 2. How does the proposed method coordinate exploration efforts among agents, and what are the benefits of using a hierarchical setup and "joint" intrinsic rewards? 3. Can you provide examples or scenarios where changing the exploration strategy during an episode is necessary or beneficial? 4. How does the high-level policy evolve over time, and what is its role in providing a curriculum for the agents? 5. Are there any additional experiments or analyses that could help demonstrate the effectiveness and versatility of the proposed approach? 6. How does the paper's contribution compare to other works in the field, and how does it advance the state of the art in multi-agent reinforcement learning?
Review
Review Summary: The paper proposes a method for coordinating the exploration efforts of agents in a multi-agent reinforcement learning setting. The approach has two main components: (i) learning different exploration policies using different "joint" intrinsic rewards; and (ii) learning a higher-level policy that selects one of the exploration policies to be executed at the beginning of each episode. Each agent has its own novelty function, which quantifies the novelty of the observations seen by that agent. To coordinate exploration, these novelty functions are combined using aggregation functions to produce an intrinsic reward for each agent. Each such aggregating function yields a different intrinsic reward. The authors propose several such aggregating functions as examples; however, the method is applicable to other aggregating functions as well, as long as they can be computed off-policy. During training, the higher-level policy selects one of the exploration policies, which is then executed for the entire episode. The episode data is used in two ways: (i) to train the higher-level policy using policy gradients to maximize extrinsic rewards along with an entropy term; and (ii) to train each exploration policy using soft actor-critic on its own intrinsic reward function (and the extrinsic reward) in an off-policy manner. Experiments on gridworld and VizDoom environments for three different tasks demonstrate that, on most tasks, the proposed approach performs at least as well as separately trained individual intrinsic rewards. Further ablation studies confirm that both the hierarchical setup and the "joint" intrinsic rewards are useful. Questions to the Authors: 1. The second sentence in section 5 is not clear: "Furthermore, the type of reward ... sufficiently complex". The high-level policy selects an exploration strategy at the beginning of each episode and then sticks to it for the entire duration of the episode. Changing the exploration strategy over the course of training might be useful in cases where an agent needs to switch to a different exploration strategy after reaching a particular bottleneck state. However, this would require the exploration strategy to be changed in the middle of an episode, which is not supported. Could you give an example where the exploration strategy must be changed over time even if one only selects the strategy at the beginning of each episode? Also, why not select the exploration strategy after every fixed number of time steps within each episode (by making the high-level policy a function of the current state)? 2. Analyzing the role of the high-level policy and its evolution over time on different tasks would be a very nice addition to the paper. Qualitative experiments demonstrating that it provides a curriculum which helps the agents surpass the performance of individual intrinsic rewards would be helpful. 3. Should \Pi in (10) also depend on i? Though the paper is reasonably well written, I find the contributions very marginal. If the authors can position the paper well with respect to the existing literature and bring out the impact of the contributions, it will be helpful.
ICLR
Title $\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization Abstract Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample-efficiency, vanilla BO can not utilize readily available prior beliefs the practitioner has on the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose πBO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. In contrast to previous approaches, πBO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when πBO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that πBO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that πBO improves on the state-of-theart performance for a popular deep learning task, with a 12.5× time-to-accuracy speedup over prominent BO approaches. 1 INTRODUCTION The optimization of expensive black-box functions is a prominent task, arising across a wide range of applications. Bayesian optimization (BO) is a sample-efficient approach to cope with this task, and has been successfully applied to various problem settings, including hyperparameter optimization (HPO) (Snoek et al., 2012), neural architecture search (NAS) (Ru et al., 2021), joint NAS and HPO (Zimmer et al., 2021), algorithm configuration (Hutter et al., 2011), hardware design (Nardi et al., 2019), robotics (Calandra et al., 2014), and the game of Go (Chen et al., 2018). Despite the demonstrated effectiveness of BO for HPO (Bergstra et al., 2011; Turner et al., 2021), its adoption among practitioners remains limited. In a survey covering NeurIPS 2019 and ICLR 2020 (Bouthillier & Varoquaux, 2020), manual search was shown to be the most prevalent tuning method, with BO accounting for less than 7% of all tuning efforts. As the understanding of hyperparameter settings in deep learning (DL) models increase (Smith, 2018), so too does the tuning proficiency of practitioners (Anand et al., 2020). As previously displayed (Smith, 2018; Anand et al., 2020; Souza et al., 2021; Wang et al., 2019), this knowledge manifests in choosing single configurations or regions of hyperparameters that presumably yield good results, demonstrating a belief over the location of the optimum. BO’s deficit to properly incorporate said beliefs is a reason why practitioners prefer manual search to BO (Wang et al., 2019), despite its documented shortcomings (Bergstra & Bengio, 2012). To improve the usefulness of automated HPO approaches for ML practictioners, the ability to incorporate such knowledge is pivotal. Well-established BO frameworks (Snoek et al., 2012; Hutter et al., 2011; The GPyOpt authors, 2016; Kandasamy et al., 2020; Balandat et al., 2020) support user input to a limited extent, such as by biasing the initial design, or by narrowing the search space; however, this type of hard prior can lead to poor performance by missing important regions. BO also supports a prior over functions p(f) via the Gaussian Process kernel. 
However, this option for injecting knowledge is not aligned with the knowledge that experts possess: they often know which ranges of hyperparameter values tend to work best (Perrone et al., 2019; Smith, 2018; Wang et al., 2019), and are able to specify a probability distribution to quantify these priors. For example, many users of the Adam optimizer (Kingma & Ba, 2015) know that its best learning rate is often in the vicinity of 1× 10−3. In practice, DL experiments are typically conducted in a low-budget setting of less than 50 full model trainings (Bouthillier & Varoquaux, 2020). As such, practitioners want to exploit their knowledge efficiently without wasting early model trainings on configurations they expect to likely perform poorly. Unfortunately, this suits standard BO poorly, as BO requires a moderate number of function evaluations to learn about the response surface and make informed decisions that outperform random search. While there is a demand to increase knowledge injection possibilities to further the adoption of BO, the concept of encoding prior beliefs over the location of an optimum is still rather novel: while there are some initial works (Ramachandran et al., 2020; Li et al., 2020; Souza et al., 2021), no approach exists so far that allows the integration of arbitrary priors and offers flexibility in the choice of acquisition function; theory is also lacking. We close this gap by introducing a novel, remarkably simple, approach for injecting arbitrary prior beliefs into BO that is easy to implement, agnostic to the surrogate model used and converges at standard BO rates for any choice of prior. Our contributions After discussing our problem setting, related work, and background (Section 2), we make the following contributions: 1. We introduce πBO, a novel generalization of myopic acquisition functions that accounts for user-specified prior distributions over possible optima, is demonstrably simple-to-implement, and can be easily combined with arbitrary surrogate models (Section 3.1 & 3.2); 2. We formally prove that πBO inherits the theoretical properties of the well-established Expected Improvement acquisition function (Section 3.3); 3. We demonstrate on a broad range of established benchmarks and in DL case studies that πBO can yield 12.5× time-to-accuracy speedup over vanilla BO (Section 4). 2 BACKGROUND AND RELATED WORK 2.1 BLACK-BOX OPTIMIZATION We consider the problem of optimizing a black-box function f across a set of feasible inputs X ⊂ Rd: x∗ ∈ arg min x∈X f(x). (1) We assume that f(x) is expensive to evaluate, and can potentially only be observed through a noisy estimate, y. In this setting, we wish to minimize f in an efficient manner, typically adhering to a budget which sets a cap on the number of points that can be evaluated. Black-Box Optimization with Probabilistic User Beliefs In our work, we consider an augmented version of the optimization problem in Eq. (1), where we have access to user beliefs in the form of a probability distribution on the location of the optimum. Formally, we define the problem of black-box optimization with probabilistic user beliefs as solving Eq. (1), given a user-specified prior probability on the location of the optimum defined as π(x) = P ( f(x) = min x′∈X f(x′) ) , (2) where regions that the user expects to likely to contain an optimum will have a high value. We note that, without loss of generality, we require π to be strictly positive on all of X , i.e., any point in the search space might be an optimum. 
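As a concrete illustration of Eq. (2), the following sketch encodes a user belief over a one-dimensional search space as an (unnormalized) Gaussian density with a small additive floor, so that π is strictly positive everywhere, as required above. The specific search space, mean, width, and floor value are illustrative assumptions tied to the Adam learning-rate example, not values used in our experiments.

```python
import numpy as np

# Illustrative 1D search space, e.g. log10(learning rate) in [-6, 0].
LOWER, UPPER = -6.0, 0.0

def user_prior(x, mean=-3.0, std=0.5, floor=1e-12):
    """User belief pi(x) on the location of the optimum.

    A Gaussian bump centered on the value the user expects to work best
    (here log10(lr) = -3, i.e. a learning rate near 1e-3), plus a small
    constant floor so that pi(x) > 0 on the whole search space.
    """
    x = np.asarray(x, dtype=float)
    density = np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))
    return density + floor

xs = np.linspace(LOWER, UPPER, 5)
print(user_prior(xs))
```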
Since the user belief π(x) can be inaccurate or even misleading, optimizing Eq. (1) given (2) is a challenging problem. 2.2 BAYESIAN OPTIMIZATION We outline Bayesian optimization (Mockus et al., 1978; Brochu et al., 2010; Shahriari et al., 2016b). Model BO aims to globally minimize f by an initial experimental design D0 = {(xi, yi)}Mi=1 and thereafter sequentially deciding on new points xn to form the data Dn = Dn−1 ∪ {(xn, yn)} for the n-th iteration with n ∈ {1 . . . N}. After each new observation, BO constructs a probabilistic surrogate model of f and uses that surrogate to evaluate an acquisition function α(x,Dn). The combination of surrogate model and acquisition function encodes the policy for selecting the next point xn+1. When constructing the surrogate, the most common choice is Gaussian processes (Rasmussen & Williams, 2006), which model f as p(f |Dn) = GP(m, k), with prior mean m (which is typically 0) and positive semi-definite covariance kernel k. The posterior mean mn and the variance s2n are mn(x) = kn(x) >(Kn + σ 2 nI)y, s 2 n(x) = k(x,x)− kn(x)>(Kn + σ2nI)kn(x), (3) where (Kn)ij = k(xi,xj), kn(x) = [k(x,x1), . . . , k(x,xn)]> and σ2n is the estimation of the observation noise variance σ2. Alternative surrogate models include Random forests (Hutter et al., 2011) and Bayesian neural networks (Springenberg et al., 2016). Acquisition Functions To obtain new candidates to evaluate, BO employs a criterion, called an acquisition function, that encapsulates an explore-exploit trade-off. By maximizing this criterion at each iteration, one or more candidate point are obtained and added to observed data. Several acquisition functions are used in BO; the most common of these is Expected Improvement (EI) (Jones et al., 1998). For a noiseless function, EI selects the next point xn+1, where f∗n is the minimal objective function value observed by iteration n, as xn+1 ∈ arg max x∈X E [ [(f∗n − f(x)]+ ] = arg max x∈X Zsn(x)Φ(Z) + sn(x)φ(Z), (4) where Z = (f∗n −mn(x))/sn(x). Thus, EI provides a myopic strategy for determining promising points; it also comes with convergence guarantees (Bull, 2011). Similar myopic acquisition functions are Upper Confidence Bound (UCB) (Srinivas et al., 2012), Probability of Improvement (PI) (Jones, 2001; Kushner, 1964) and Thompson Sampling (TS) (Thompson, 1933). A different class of acquisition functions is based on non-myopic criteria, such as Entropy Search (Hennig & Schuler, 2012), Predictive Entropy Search (Hernández-Lobato et al., 2014) and Max-value Entropy Search (Wang & Jegelka, 2017), which select points to minimize the uncertainty about the optimum, and the Knowledge Gradient (Frazier et al., 2008), which aims to minimize the posterior mean of the surrogate at the subsequent iteration. Our work applies to all acquisition functions in the first class, and we leave its extension to those in the second class for future work. 2.3 RELATED WORK There are two main categories of approaches that exploit prior knowledge in BO: approaches that use records of previous experiments, and approaches that incorporate assumptions on the black-box function provided either directly or indirectly by the user. As πBO exploits prior knowledge from users, we briefly discuss approaches which utilize previous experiments, and then comprehensively discuss the literature on exploiting expert knowledge. Learning from Previous Experiments Transfer learning for BO aims to automatically extract and use knowledge from prior executions of BO. 
These executions can come, for example, from learning and optimizing the hyperparameters of a machine learning algorithm on different datasets (van Rijn & Hutter, 2018; Swersky et al., 2013; Wistuba et al., 2015; Perrone et al., 2019; Feurer et al., 2015; 2018), or from optimizing the hyperparameters at different development stages (Stoll et al., 2020). For a comprehensive overview of meta learning for hyperparameter optimization, please see the survey from Vanschoren (2018). In contrast to these transfer learning approaches, πBO and the related work discussed below does not hinge on the existence of previous experiments, and can therefore be applied more generally. Incorporating Expert Priors over Function Structure BO can leverage structural priors on how the objective function is expected to behave. Traditionally, this is done via the surrogate model’s prior over functions, e.g., the kernel of the GP. However, there are lines of work that explore additional structural priors for BO to leverage. For instance, both SMAC (Hutter et al., 2011) and iRace (LópezIbáñez et al., 2016) support structural priors in the form of log-transformations, Li et al. (2018) propose to use knowledge about the monotonicity of the objective function as a prior for BO, and Snoek et al. (2014) model non-stationary covariance between inputs by warping said inputs. Oh et al. (2018) and Siivola et al. (2018) both propose structural priors tailored to high-dimensional problems, addressing the issue of over-exploring the boundary described by Swersky (2017). Oh et al. (2018) propose a cylindrical kernel that expands the center of the search space and shrinks the edges, while Siivola et al. (2018) propose adding derivative signs to the edges of the search space to steer BO towards the center. Lastly, Shahriari et al. (2016a) propose a BO algorithm for unbounded search spaces which uses a regularizer to penalize points based on their distance to the center of the user-defined search space. All of these approaches incorporate prior information on specific properties of the function or search space, and are thus not always applicable. Moreover, they do not generally direct the search to desired regions of the search space, offering the user little control over the selection of points to evaluate. Incorporating Expert Priors over Function Optimum Few previous works have proposed to inject explicit prior distributions over the location of an optimum into BO. In these cases, users explicitly define a prior that encodes their beliefs on where the optimum is more likely to be located. Bergstra et al. (2011) suggest an approach that supports prior beliefs from a fixed set of distributions. However, this approach cannot be combined with standard acquisition functions. BOPrO (Souza et al., 2021) employs a similar structure that combines the user-provided prior distribution with a data-driven model into a pseudo-posterior. From the pseudo-posterior, configurations are selected using the EI acquisition function, using the formulation in Bergstra et al. (2011). While BOPrO is able to recover from misleading priors, its design restricts it to only use EI. Moreover, it does not provide the convergence guarantees of πBO. Li et al. (2020) propose to infer a posterior conditioned on both the observed data and the user prior through repeated Thompson sampling and maximization under the prior. This method displays robustness against misleading priors but lacks in empirical performance. 
Additionally, it is restricted to only one specific acquisition function. Ramachandran et al. (2020) use the probability integral transform to warp the search space, stretching high-probability regions and shrinking others. While the approach is model- and acquisition function agnostic, it requires invertible priors, and does not empirically display the ability to recover from misleading priors. In Section 4, we demonstrate that πBO compares favorably for priors over the function optimum, and shows improved empirical performance. Additionally, we do a complete comparison of all approaches in Appendix C. In summary, πBO sets itself apart from the methods above by being simpler (and thus easier to implement in different frameworks), flexible with regard to different acquisition functions and different surrogate models, the availability of theoretical guarantees, and, as we demonstrate in Section 4, better empirical results. 3 METHODOLOGY We now present πBO, which allows users to specify their belief about the location of the optimum through any probability distribution. A conceptually simple approach, πBO can be easily implemented in existing BO frameworks and can be combined directly with the myopic acquisition functions listed above. πBO augments an acquisition function to emphasize promising regions under the prior, ensuring such regions are to be explored frequently. As optimization progresses, the πBO strategy increasingly resembles that of vanilla BO, retaining its standard convergence rates (see Section 3.3). πBO is publicly available as part of the SMAC (https://github.com/automl/SMAC3) and HyperMapper (https://github.com/luinardi/hypermapper) HPO frameworks. 3.1 PRIOR-WEIGHTED ACQUISITION FUNCTION In πBO, we consider π(x) in Eq. (2) to be a weighting scheme on points in X . The heuristic provided by an acquisition function α(x,Dn), such as EI in Eq. (2.2), can then be combined with said weighting scheme to form a prior-weighted version of the acquisition function. The resulting strategy then becomes: xn ∈ arg max x∈X α(x,Dn)π(x). (5) This emphasizes good points under π(x) throughout the optimization. While this property is suitable for well-located priors π, it risks incurring a substantial slowdown for poorly-chosen priors; we will now show how to counter this by decaying the prior over time. 3.2 DECAYING PRIOR-WEIGHTED ACQUISITION FUNCTION As the optimization progresses, we should increasingly trust the surrogate model over the prior; the model improves with data while the user prior remains fixed. This cannot be achieved with the formulation in Eq. (5), as poorly-chosen priors would permanently slow down the optimization. Rather, to accomplish this desired behaviour, the influence of the prior needs to decay over time. Building on the approaches of Lee et al. (2020) and Souza et al. (2021), we accomplish this by raising the prior to a power γn ∈ R+, which decays towards zero with growing n. Thus, the resulting prior πn(x) = π(x)γn reflects a belief on the location of an optimum that gets weaker with time, converging towards a uniform distribution. We set γn = β/n, where β ∈ R+ is a hyperparameter set by the user, reflecting their confidence in π(x). We provide a sensitivity study on β in Appendix A. For a given acquisition function α(x,Dn) and user-specified prior π(x), we define the decaying prior-weighted acquisition function at iteration n as απ,n(x,Dn) ∆ = α(x,Dn)πn(x) ∆ = α(x,Dn)π(x)β/n (6) and its accompanying strategy as the maximizer of απ,n. 
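The sketch below illustrates the one-line nature of this modification: given any existing myopic acquisition function α(x, D_n), the decaying prior-weighted variant simply multiplies it by π(x)^(β/n) before maximization. The acquisition function, prior, and grid-based maximizer shown here are generic toy stand-ins for exposition, not the Spearmint or HyperMapper internals.

```python
import numpy as np

def pi_bo_acquisition(alpha, prior, beta, n):
    """Decaying prior-weighted acquisition function (Eq. 6).

    alpha : callable x -> acquisition value, e.g. Expected Improvement
    prior : callable x -> user prior pi(x) on the optimum location
    beta  : prior confidence hyperparameter
    n     : current optimization iteration (n >= 1)
    """
    gamma_n = beta / n  # the prior's influence decays towards uniform as n grows
    return lambda x: alpha(x) * prior(x) ** gamma_n

# Minimal usage example with toy stand-ins for alpha and pi on a 1D grid.
xs = np.linspace(0.0, 1.0, 1001)
alpha = lambda x: np.exp(-10.0 * (x - 0.7) ** 2)            # toy acquisition surface
prior = lambda x: np.exp(-0.5 * ((x - 0.2) / 0.1) ** 2)     # toy Gaussian-shaped belief

for n in (1, 5, 50):
    acq = pi_bo_acquisition(alpha, prior, beta=10.0, n=n)
    x_next = xs[np.argmax(acq(xs))]
    print(f"iteration {n}: selected point {x_next:.3f}")
```

Early on, the selected point lies near the prior mode; as n grows, the selection moves towards the maximizer of the unweighted acquisition function.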
With the acquisition function in Eq. (6), the prior will assume large importance initially, promoting the selection of points close to the prior mode. With time, the exponent on the prior will tend to zero, making the prior tend to uniform. Thus, with increasing n, the point selection of απ,n becomes increasingly similar to that of α. Algorithm 1 displays the simplicity of the new strategy, highlighting the required one-line change (Line 6) in the main BO loop. In Line 3, the mode of the prior is used as a first initial sample if available. Otherwise, only sampling is used for initialization. Algorithm 1 πBO Algorithm 1: Input: Input space X , prior distribution over optimum π(x), prior confidence parameter β, size M of the initial design, max number of optimization iterations N . 2: Output: Optimized design x∗. 3: {xi}Mi=1 ∼ π(x), {yi ← f(xi) + i}Mi=1, i ∼ N(0, σ2) 4: D0 ← {(xi, yi)}Mi=1 5: for {n = 1, 2, . . . , N} do 6: xnew ← arg maxx∈X α(x,Dn−1)π(x)β/n 7: ynew ← f(xnew) + i 8: Dn ← Dn−1 ∪ {(xnew, ynew)} 9: end for 10: return x∗ ← arg min(xi,yi)∈DN yi To illustrate the behaviour of πBO, we consider a toy problem with Gaussian priors on three different locations of the 1D space (center, left and right) as displayed in Figure 1. We define a 1D-Log-Branin toy problem by setting the second dimension of the 2D Branin function to the global optimum x2 = 2.275 and optimizing for the first dimension. Initially (iteration 4 in the top row), πBO amplifies the acquisition function α in high-probability regions, putting a lot of trust in the prior. As the prior decays (iteration 6 and 8 in the middle and bottom rows, respectively), the influence of the prior on the point selection decreases. By later iterations, πBO has searched substantially around the prior mode, and moves gradually towards other parts of the search space. This is of particular importance for the scenarios in the right column, where πBO recovers from a misleading prior. In Appendix B, we show that πBO is applicable to different surrogate models and acquisition functions. 3.3 THEORETICAL ANALYSIS We now study the πBO method from a theoretical standpoint when paired with the EI acquisition function. For the full proof, we refer the reader to Appendix E. To provide convergence rates, we rely on the set of assumptions introduced by Bull (2011). These assumptions are satisfied for popular kernels like the Matérn (1960) class and the Gaussian kernel, which is obtained in the limit ν →∞, where the rate ν controls the smoothness of functions from the GP prior. Our theoretical results apply when both length scales ` and the global scale of variation σ are fixed; these results can then be extended to the case where the kernel hyperparameters are learned using Maximum Likelihood Estimation (MLE) following the same procedure as in Bull (2011) (Theorem 5). We define the loss over the ball BR for a function f of norm ||f ||H`(X ) ≤ R in the reproducing kernel Hilbert space (RKHS)H`(X ) given a symmetric positive-definite kernel K` as Ln(u,Dn,H`(X ), R) ∆ = sup ||f ||H`(X)≤R Euf [f(x∗n)−min f ], (7) where n is the optimization iteration and u a strategy. We focus on the strategy that maximizes EIπ , the prior-weighted EI, and show that the loss in Equation (7) can, at any iteration n, be bounded by the vanilla EI loss function. We refer to EIπ,n and EIn when we want to emphasize the iteration n for the acquisition functions EIπ and EI, respectively. Theorem 1. 
Given Dn, K`, π, β, σ, `, R and the compact set X ⊂ Rd as defined above, the loss Ln incurred at iteration n by EIπ,n can be bounded from above as Ln(EIπ,n,Dn,H`(X ), R) ≤ Cπ,nLn(EIn,Dn,H`(X ), R), Cπ,n = ( maxx∈X π(x) minx∈X π(x) )β/n . (8) Using Theorem 1, we obtain the convergence rate of EIπ . This trivially follows when considering the fraction of the losses in the limit and inserting the original convergence rate on EI as in Bull (2011): Corollary 1. The loss of a decaying prior-weighted Expected Improvement strategy, EIπ, is asymptotically equal to the loss of an Expected Improvement strategy, EI: Ln(EIπ,n,Dn,H`(X ), R) ∼ Ln(EIn,Dn,H`(X ), R), (9) so we obtain a convergence rate for EIπ of Ln(EIπ,n,Dn,H`(X ), R) = O(n−(ν∧1)/d(log n)γ). Thus, we determine that the weighting introduced by EIπ does not negatively impact the worst-case convergence rate. The short-term performance is controlled by the user in their choice of π(x) and β. This result is coherent with intuition, as a weaker prior or quicker decay will yield a short-term performance closer to that of EI. In contrast, a stronger prior or slower decay does not guarantee the same short-term performance, but can produce better empirical results, as shown in Section 4. 4 RESULTS We empirically demonstrate the efficiency of πBO in three different settings. As πBO is a general method to augment acquisition functions, it can be implemented in different parent BO packages, and the implementation in any given package inherits the pros and cons of that package. To minimize confounding factors concerning this choice of parent package, we keep comparisons within the methods in one package where possible and provide results in the other packages in Appendix C. In Sec. 4.2, using Spearmint as a parent package, we evaluate πBO against three intuitive baselines to assess its performance and robustness on priors with different qualities, ranging from very accurate to purposefully detrimental. To this end, we use toy functions and cheap surrogates, where priors of known quality can be obtained. Next, in Sec. 4.3, we compare πBO against two competitive approaches (BOPrO and BOWS) that integrate priors over the optimum similarly to πBO, using HyperMapper (Nardi et al., 2019) as a parent framework, in which the most competitive baseline BOPrO is implemented. For these experiments we adopt a Multilayer Perceptron (MLP) benchmark on various datasets, using the interface provided by HPOBench (Eggensperger et al., 2021), with priors constructed around the defaults provided by the library. Lastly, in Sec. 4.4, we apply πBO and other approaches to two deep learning tasks, also using priors derived from publicly available defaults. Further, we demonstrate the flexibility of πBO in Appendix B, where we evaluate πBO in SMAC (Hutter et al., 2011; Lindauer et al., 2021) with random forests, as another framework with another surrogate model, and adapt it to use the UCB, TS and PI acquisition functions instead of EI. 4.1 EXPERIMENTAL SETUP Priors For our surrogate and toy function tasks, we follow the prior construction methodology in BOPrO (Souza et al., 2021) and create three main types of prior qualities, all Gaussian: strong, weak and wrong. The strong and weak priors are located to have a high and moderate density on the optimum, respectively. The wrong prior is a narrow distribution located in the worst region of the search space. 
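A hedged sketch of how such prior qualities can be instantiated is given below, following the construction detailed in Appendix D (a Gaussian centered near the optimum with a quality-specific width, and a narrow Gaussian at the empirical maximum for the wrong prior). The function name and the unit-width search space are illustrative.

```python
import numpy as np

def make_prior(x_opt, x_worst, quality, rng, space_width=1.0):
    """Construct a Gaussian prior of a given quality (sketch of Appendix D).

    x_opt   : (approximate) location of the optimum, shape (d,)
    x_worst : empirical maximum of the objective, shape (d,)
    quality : one of "strong", "weak", "wrong"
    Returns (mean, std) of an axis-aligned Gaussian prior.
    """
    sigma_strong = 0.01 * space_width   # 1% of the search space
    sigma_weak = 0.10 * space_width     # 10% of the search space
    if quality == "strong":
        return x_opt + rng.normal(0.0, sigma_strong, size=x_opt.shape), sigma_strong
    if quality == "weak":
        return x_opt + rng.normal(0.0, sigma_weak, size=x_opt.shape), sigma_weak
    # "wrong": a narrow prior placed on the worst observed region, without extra noise.
    return np.array(x_worst, dtype=float), sigma_strong

rng = np.random.default_rng(0)
print(make_prior(np.array([0.5, 0.5]), np.array([0.95, 0.05]), "weak", rng))
```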
For the OpenML MLP tuning benchmark, we utilize the defaults and search spaces provided in HPOBench (Eggensperger et al., 2021), and construct Gaussian priors for each hyperparameter with their mean on the default value, and a standard deviation of 25% of the hyperparameter’s domain. For the DL case studies, we utilize defaults from each task’s repository and, for numerical hyperparameters, once again set the standard deviation to 25% of the hyperparameter’s domain. For categorical hyperparameters, we place a higher probability on the default. As such, the quality of the prior is ultimately unknown, but serves as a proxy for what a practitioner may choose and has shown to be a reasonable choice (Anastacio & Hoos, 2020). For all experiments, we run πBO with β = N/10, where N is the total number of iterations, in order to make the prior influence approximately equal in all experiments, regardless of the number of allowed BO iterations. We investigate the sensitivity to β in Appendix A, and the sensitivity to prior quality in Appendix G. Baselines We empirically evaluate πBO against the most competitive approaches for priors over the optimum described in Section 2.3: BOPrO (Souza et al., 2021) and BO in Warped Space (BOWS) (Ramachandran et al., 2020). To contextualize the performance of πBO, we provide additional, simpler baselines: random sampling, sampling from the prior and BO with prior-based initial design. The latter is initialized with the mode of the prior in addition to its regular initial design. In our main results, we choose Spearmint (with EI) (Snoek et al., 2012) for this mode-initialized baseline, simply referring to it as Spearmint. See Appendix F for complete details on the experiments. 4.2 ROBUSTNESS OF πBO First, we study the robustness of πBO. To this end, we show that πBO benefits from informative priors and can recover from wrong priors, being consistent with our theoretical results in Section 3.3. To this end, we consider a well-known black-box optimization function, Branin (2D), as well as two surrogate HPO tasks from the Profet suite (Klein et al., 2019): FC-Net (6D) and XGBoost (8D). For these tasks, we exemplarily show results for πBO implemented in the Spearmint framework. As Figure 2 shows, πBO is able to quickly improve over sampling from the prior. Moreover, it improves substantially over Spearmint (with mode initialization) for all informative priors, staying up to an order of magnitude ahead throughout the optimization for both strong and weak priors. For wrong priors, πBO displays desired robustness by recovering to approximately equal regret as Spearmint. In contrast, Spearmint frequently fails to substantially improve from its initial design on the strong and weak prior, which demonstrates the importance of considering the prior throughout the optimization procedure. This effect is even more pronounced on the higher-dimensional tasks FCNet and XGBoost, where BO typically spends many iterations at the boundary (Swersky, 2017). Here, πBO rapidly improves multiple orders of magnitude over the initial design, displaying its ability to efficiently exploit the information provided by the prior. 4.3 COMPARISON OF πBO AGAINST OTHER PRIOR-GUIDED APPROACHES Next, we study the performance of πBO against other state-of-the-art prior-guided approaches. 
To this end, we consider optimizing 5 hyperparameters of an MLP for classification (Eggensperger et al., 2021) on 6 different OpenML datasets (Vanschoren et al., 2014) and compare against BOPrO (Souza et al., 2021) and BOWS (Ramachandran et al., 2020). For minimizing confounding factors, we implement πBO and BOWS in HyperMapper (Nardi et al., 2019), the same framework that BOPrO runs on. Moreover, we let all approaches share πBO’s initialization procedure. We consider a budget of 50 iterations as it is common with ML practitioners (Bouthillier & Varoquaux, 2020). In Figure 3, we see that πBO offers the best performance on four out of six tasks, and displays the most consistent performance across tasks. In contrast to them BOWS and BOPrO, πBO also comes with theoretical guarantees and is flexible in the choice of framework and acquisition function. 4.4 CASE STUDIES ON DEEP LEARNING PIPELINES Last, we study the impact of πBO on deep learning applications, which are often fairly expensive, making efficiency even more important than in HPO for traditional machine learning. To this end, we consider two deep learning case studies: segmentation of neuronal processes in electron microscopy images with a U-Net(6D) (Ronneberger et al., 2015), with code provided from the NVIDIA deep learning examples repository (Przemek et al.), and image classification on ImageNette-128 (6D) (Howard, 2019), a light-weight adaptation of ImageNet (Deng et al., 2009), with code from the repository of the popular FastAI library (Howard et al., 2018). We mimic the setup from Section 4.3 by using the HyperMapper framework and identical initialization procedures across approaches. Gaussian priors are set on publicly available default values, which are results of previous tuning efforts of the original authors. We again optimize for a practical budget of 50 iterations (Bouthillier & Varoquaux, 2020). As test splits for both tasks were not available to us, we report validation scores. As shown in Figure 4, πBO achieves a 2.5× time-to-accuracy speedup over Vanilla BO. For ImageNette, the performance of πBO at iteration 4 already surpasses the performance of Vanilla BO at Iteration 50, demonstrating a 12.5× time-to-accuracy speedup. Ultimately, πBO’s final performance establishes a new state-of-the-art validation performance on ImageNette with the provided pipeline, with a final accuracy of 94.14% (vs. the previous state of the art with 93.55%1). 5 CONCLUSION AND FUTURE WORK We presented πBO, a conceptually very simple Bayesian optimization approach for leveraging user beliefs about the location of an optimum, which relies on a generalization of myopic acquisition functions. πBO modifies the selection of design points through a decaying weighting scheme, promoting high-probability regions under the prior. Contrary to previous approaches, πBO imposes only minor restrictions on the type of priors, surrogates or frameworks that can be used. πBO provably converges at regular rates, displays state-of-the art performance across tasks, and effectively recovers from poorly specified priors. Moreover, we have demonstrated that πBO can yield substantial performance gains for practical low-budget settings, improving on the state-of-the-art for a real-world CNN tuning tasks even with trivial choices for the prior. For practitioners who have historically relied on manual or grid search for HPO, we hope that πBO will serve as an intuitive and effective tool for bridging the gap between traditional tuning methods and BO. 
πBO sets the stage for several follow-up studies. Amongst others, we will examine the extension of πBO to non-myopic acquisition functions, such as entropy-based methods. Non-myopic acquisition functions do not fit well in the current πBO framework, as they do not necessarily benefit from evaluating inputs expected to perform well. We will also combine πBO with multi-fidelity optimization methods to yield even higher speedups, and with multi-objective optimization to jointly optimize performance and secondary objective functions, such as interpretability or fairness of models. 1https://github.com/fastai/imagenette#imagenette-leaderboard, 80 Epochs, 128 Resolution 6 ETHICS STATEMENT Our work proposes an acquisition function generalization which incorporates prior beliefs about the location of the optimum into optimization. The approach is foundational and thus will not bring direct societal or ethical consequences. However, πBO will likely be used in the development of applications for a wide range of areas and thus indirectly contribute to their impacts on society. In particular, we envision that πBO will impact a multitude of fields by allowing ML experts to inject their knowledge about the location of the optimum into Bayesian Optimization. We also note that we intend for πBO to be a tool that allows users to assist Bayesian Optimization by providing reasonable prior knowledge and beliefs. This process induces user bias into the optimization, as πBO will inevitably start by optimizing around this prior. As some users may only be interested in optimizing in the direct neighborhood of their prior, πBO could allow them to do so if provided with a high β value in relation to the number of iterations. Thus, if improperly specified, πBO could serve to reinforce user’s beliefs by providing improved solutions only for the user’s region of interest. However, if used properly, πBO will reduce the computational resources required to find strong hyperparameter settings, contributing to the sustainability of machine learning. 7 REPRODUCIBILITY In order to make the experiments run in πBO as reproducible as possible, we have included links to repositories of our implementations in both Spearmint and HyperMapper, with instructions on how to run our experiments. Moreover, we have included in said repositories all of the exact priors that we have used for our runs, which run out of the box. The priors we used were, in our opinion, well motivated as to avoid subjectivity, which we hope serves as a good frame of reference for similar works in the future. Specifically, Appendix 4.4 describes how we ran our DL experiments, Appendix F.1 goes into the implementation in further detail, and Appendix D displays the exact priors for all our experiments and prior strengths. Our Spearmint implementation of both πBO and BOWS is available at https://github.com/piboauthors/PiBO-Spearmint, and our HyperMapper implementation is available at https://github.com/piboauthors/ PiBO-Hypermapper. For our results on the convergence of πBO, we have provided a complete proof in Appendix E. 8 ACKNOWLEDGEMENTS Luigi Nardi was supported in part by affiliate members and other supporters of the Stanford DAWN project — Ant Financial, Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware. Carl Hvarfner and Luigi Nardi were partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Artur Souza was supported by CAPES, CNPq, and FAPEMIG. 
Frank Hutter acknowledges support by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme through grant no. 716721, through TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828 and by the state of BadenWürttemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 39/963-1 FUGG. Marius Lindauer acknowledges support by the European Research Council (ERC) under the Europe Horizon programme. The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC partially funded by the Swedish Research Council through grant agreement no. 2018-05973. A BETA ABLATION STUDY We consider the effect of the β hyperparameter of πBO introduced in Section 3.2, controlling the speed of the prior decay. To show the effect of this hyperparameter, we display the performance of πBO for the toy and surrogate-based benchmarks across all prior qualities. We emphasize the trade-off between high-end performance on good priors and robustness to bad priors. In general, a higher value of β yields better performance for good priors, but makes πBO slower to recover from bad priors. This behaviour follows intuition and the results provided in Section 3.3. In Figure 5, we display how πBO performs for different choices of β, and once again provide sampling from the prior and Spearmint as baselines. Following the prior decay parameter baseline by (Souza et al., 2021), we show that the choice of β = 10 onsistently gives one of the best performances for strong priors, while retaining good overall robustness. Nearly all choices of β give a final performance better than that of Spearmint for good priors. Additionally, there is a clear relationship between final performance and β on all good priors. This is best visualized in the weak XGBoost experiment, where the final performances are distinctly sorted by increasing β. Similar patterns are not as apparent in the final performance on wrong priors. This behaviour highlights the benefits of slowly decaying the prior. Overall, πBO is competitive for a wide range of β, but suffers slightly worse final performance on good priors for low values of β. B πBO VERSATILITY We show the versatility of πBO by implementing it in numerous variants of SMAC Hutter et al. (2011), a well-established HPO framework which supports both GP and RF surrogates, and a majority of the myopic acquisition functions mentioned in Section 2. We showcase the performance of πBO-EI, πBO-PI, πBO-UCB and πBO-TS on the general formulation of πBO with a GP surrogate, as well as πBO-EI with an RF surrogate, which requires a minor adaptation. B.1 GENERAL FORMULATION OF πBO To allow for the universality of πBO across several acquisition function, we must consider the various magnitudes of acquisition functions. As UCB and TS typically output values in the same order of magnitude and sign as the objective function, we do not want the behaviour of πBO to be affected by such variations. The solution to the problem referenced above is to add a simple affine transformation to the observations, {yi}ni=1, by subtracting by the incumbent, y∗n. As such, we consider at each time step not the original dataset, Dn = {(xi, yi)}ni=1, but the augmented dataset D̂n = {(xi, yi − y∗n)}ni=1. 
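A minimal sketch of this incumbent shift is given below; the only change is that the best observed value is subtracted from all observations before the surrogate is fit. Variable names are illustrative.

```python
import numpy as np

def shift_by_incumbent(X, y):
    """Return the augmented dataset D_hat_n = {(x_i, y_i - y*_n)}.

    Subtracting the incumbent y*_n removes the sign and offset of the raw
    objective values, which is what makes the prior weighting behave
    consistently for UCB and TS; EI and PI are unchanged by this shift.
    """
    y = np.asarray(y, dtype=float)
    return X, y - y.min()  # minimization: the incumbent is the smallest observed y

X = np.array([[0.1], [0.4], [0.9]])
y = [125.3, 118.7, 131.2]
print(shift_by_incumbent(X, y)[1])  # -> [6.6, 0.0, 12.5]
```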
With this formulation, we get the desired scale- and sign-invariance in the UCB and TS acquisition functions, without changing the original strategy of any of the acquisition function. Notably, this change leaves prior-weighted EI and PI unaffected. B.2 RANDOM FOREST SURROGATE We now demonstrate πBO with a RF surrogate model. In the SMAC implementation of the RF surrogate, the model forms piece-wise constant mean and covariance functions. Naturally, this leads to the EI, PI or UCB acquisition function surface being piece-wise constant as well. Consequently, an acquisition function with a RF surrogate will typically have a region of global optima. The choice of the next design point is then selected uniformly at random among the candidate optima. We wish to retain this randomness when applying πBO. As such, we require the prior to be piece-wise constant, too. To do so, we employ a binning approach, that linearly rounds prior values after applying the decay term. The granularity of the binning decreases at the same rate as the prior, allowing the piece-wise constant regions of the prior grow in size as optimization progresses. In Figure 9, we demonstrate the importance of the piece-wise constant acquisition function by showing the point selection when applying a πBO with a continuous prior to an RF surrogate (left) and when applying the binning approach (right). Notably, the smooth prior on the left repeatedly proposes design points very close to previous points, as the prior forces the selection of points near the boundary of a promising region. Thus, the surrogate model rarely improves, and the optimization gets stuck at said boundary for multiple iterations at a time. This is best visualized at iteration 5 and 10, where similar points have been selected for all iterations in the time span. With the binned prior on the right, the selection of design points occurs randomly within a region, avoiding the static point selection and updating of non-modified approach. In Figure 8, we report the performance of πBO with a RF surrogate and the binning approach. This approach is competitive, as it provides substantial improvement over SMAC, improves over sampling from the prior, and quickly recovers from misleading priors. Notably, the binning is not required for discrete parameters, as the piece-wise constant property holds by default. Thus, this adaptation is only necessary for continuous parameters. C OTHER PRIOR-BASED APPROACHES We now demonstrate the performance of πBO for five different functions and HPO Surrogates: Branin, Hartmann-6, as well as three tasks from the Profet suite - SVM, FCNet and XGBoost. We compare all frameworks for priors over the optimum - namely BOPrO Souza et al. (2021), BOWS Ramachandran et al. (2020), TPE Bergstra et al. (2011), PS-G Li et al. (2020). The performance of πBO is shown on two different frameworks - Spearmint and Hypermapper - to allow for fair comparison and display cross-framework consistency. As BOWS is implemented in Spearmint and BOPrO in Hypermapper, they appear in the plots retaining to their framework. We display each approach with vanilla Spearmint/Hypermapper, with normal initialization, as an additional baseline. Moreover, we display the performance of πBO implemented in Spearmint, as well as Mode + Spearmint, on the MLP tuning tasks. D PRIOR CONSTRUCTION We now present the method by which we construct our priors. For the synthetic benchmarks, we mimic (Souza et al., 2021) by offsetting a Gaussian distribution from the optima. 
For our case studies, we choose a Gaussian prior with zero correlation between dimensions. This was required in order to have a simple, streamlined approach that was compatible with all frameworks. We constructed the priors once before conducting the experiments, and kept them fixed throughout. Synthetic and Surrogate-based HPO Benchmarks For these benchmarks, the approximate optima of all included functions could be obtained in advance, either analytically or empirically through extensive sampling. Thus, the correctness of the prior is ultimately known in advance. For a function of dimensionality d with optimum at x∗, the strong and weak prior qualities were constructed by using a quality-specific noise term = { i}di=1 and quality-specific standard deviation as a fraction of the search space. For the strong prior πs(x), we use a small standard deviation σs = 1% and construct the prior as πs(x) ∼ N (x∗ + , σs), i ∼ N (0, σs). (10) We construct the weak priors analogously by using a larger standard deviation σw = 10%. For our 20 runs of the strong and weak prior, this procedure yielded us 20 unique priors per quality type, with varying offsets from the true optimum. Additionally, the density on the optimum is substantially larger for the strong prior than the weak prior. No priors with a mean outside the search space were allowed, such priors were simply replaced. For Branin, we only considered one of the three Branin optima for this procedure, since not all included frameworks support multi-modal distributions. For the wrong prior, we construct it similarly to the strong prior, but around the empirical maximum, x∗̄, of the objective function in the search space. Since this point was far away from the optimum for all benchmarks, we did not add additional noise. So, the wrong prior πm is constructed as πm(x) ∼ N (x∗̄, σs), (11) which means that the wrong prior is identical across runs for a given benchmark. E PROOFS Here, we provide the complete proofs for the Theorem and Corollary introduced in 3.3. In addition, we provide insight into the interplay between β, the prior π, and the value of the derived bound Cπ,n. Theorem 1. Given Dn, K`, π, σ, `, R and the compact set X ⊂ Rd as defined above, the loss Ln incurred at iteration n by EIπ,n can be bounded from above as Ln(EIπ,n,Dn,H`(X ), R) ≤ Cπ,nLn(EIn,Dn,H`(X ), R), Cπ,n = ( maxx∈X π(x) minx∈X π(x) )β/n . (12) Proof. To bound the performance of EIπ to that of EI, we primarily need to consider Lemma 7 and Lemma 8 by Bull Bull (2011). In Lemma 7, it is stated that for any sequence of points {xi}ni=1, dimensionality d, kernel length scales `, and p ∈ N, the posterior variance s2n on xn+1 will, for a large value C, satisfy the following inequality at most p times, sn(xn+1; `) ≥ Cp−(ν∧1)/d(log p)γ , γ = { α, ν ≤ 1 0, ν > 1 . (13) Thus, we can bound the posterior variance by assuming a point in time np where Eq. 13 has held p times. We now consider Lemma 8 where, through a number of inequalities, EI is bounded by the actual improvement In max ( In −Rs, τ(−R/σ) τ(R/σ) In ) ≤ EIn(x) ≤ In + (R+ σ)s, (14) where In = (f(x∗n)−f(x))+, τ(z) = zΦ(z) +φ(z) and s = sn(xn; `). Since πBO re-weights EIn by πn, these bounds need adjustment to hold for EIπ,n. For the upper bound provided in Lemma 8, we make use of maxx∈X πn(x) to bound EIπ,n(x) for any point x ∈ X : EIπ,n(x) maxx∈X πn(x) = EIn(x)πn(x) maxx∈X πn(x) ≤ EIn(x) ≤ In + (R+ σ)s. 
(15) For the lower bounds, we instead rely on minx∈X πn(x) in a similar manner: max ( In −Rs, τ(−R/σ) τ(R/σ) In ) ≤ EIn(x) ≤ EIn(x)πn(x) minx∈X πn(x) = EIπ,n(x) minx∈X πn(x) . (16) Consequently, EIπ can be bounded by the actual improvement as min x∈X πn(x) max ( In −Rs, τ(−R/σ) τ(R/σ) In ) ≤ EIπ,n(x) ≤ max x∈X πn(x)(In + (R+ σ)s). (17) With these bounds in place, we consider the setting as in the proof for Theorem 2 in Bull Bull (2011), which proves an upper bound for the EI strategy in the fixed kernel parameters setting. At an iteration np, p ≤ np ≤ 3p, the posterior variance will be bounded by Cp−(ν∧1)/d(log p)γ . Furthermore, since In ≥ 0 and ||f ||H`(X )) ≤ R, we can bound the total improvement as∑ i Ii ≤ ∑ i f(x∗i )− f(x∗i+1) ≤ f(x∗1)−min f ≤ 2||f ||∞ ≤ 2R, (18) leaving us a maximum of p times that In ≥ 2Rp−1. Consequently, both the posterior variance s2np and the improvement Inp are bounded at np. For a future iteration n, 3p ≤ n ≤ 3(p+ 1), we use the bounds on EIπ , snp and Inp to obtain the bounds on the EIπ loss: Ln(EIπ,Dn,H`(X ), R) = f(x∗n)−min f ≤ f(x∗np)−min f ≤ EIπ,np(x ∗) minx∈X πn(x) τ(R/σ) τ(−R/σ) ≤ EIπ,np(xn+1) minx∈X πn(x) τ(R/σ) τ(−R/σ) ≤ maxx∈X πn(x) minx∈X πn(x) τ(R/σ) τ(−R/σ) ( Inp + (R+ σ)snp ) ≤ ( maxx∈X π(x) minx∈X π(x) )β/n τ(R/σ) τ(−R/σ) (2Rp−1 + (R+ σ)Cp−(ν∧1)/d(log p)γ), where the last inequality is a factor Cπ,n = ( maxx∈X π(x) minx∈X π(x) )β/n larger than the bound on Ln(EI,Dn,H`(X ), R). Corollary 1. The loss of a decaying prior-weighted Expected Improvement strategy, EIπ, is asymptotically equal to the loss of an Expected Improvement strategy, EI: Ln(EIπ,n,Dn,H`(X ), R) ∼ Ln(EIn,Dn,H`(X ), R), (19) so we obtain a convergence rate for EIπ of Ln(EIπ,n,Dn,H`(X ), R) = O(n−(ν∧1)/d(log n)γ). Proof. We simply compute the fraction of the losses in the limit, lim n→∞ Ln(EIπ,Dn,H`(X ), R) Ln(EI,Dn,H`(X ), R) ≤ lim n→∞ ( maxx∈X π(x) minx∈X π(x) )β/n = 1. (20) E.1 SENSITIVITY ANALYSIS ON Cπ,n We now provide additional insight into how Cπ,n depends on the choices of prior and β made by the user. To do so, we consider a typical low-budget setting and display values of Cπ,n at iteration 50. We consider a one-dimensional search space where with a Gaussian prior located in the center of the search space. In the plot below, we display how the choice of σ, given as a percentage of the search space, and β, the prior confidence parameter, yield different values of Cπ,n. We see that, for approximately half of the space, the upper bound on the loss is at least 80% (bright green or yellow) of the upper bound of EI, and only a small region of very narrow priors (dark blue) give a low guaranteed convergence rate. F EXPERIMENT DETAILS F.1 FRAMEWORKS Our implementations of πBO require little change in the supporting frameworks, Spearmint and HyperMapper, and we stay as close to the default settings as possible for each framework. For both Spearmint and HyperMapper, we consider a Matérn 5/2 Kernel. For particularly strong priors, rounding errors can cause the prior to be zero in parts of the search space, potentially affecting πBO’s convergence properties. To avoid these rounding errors and ensure a strictly positive prior, we add a small constant, = 10−12, to the prior throughout the search space for all prior qualities. For the initial sampling from the prior, we truncate the distribution by disallowing sampled points from outside the search space, instead re-sampling such points. 
During optimization, we do not to explicitly truncate the prior, as points outside the search space are never considered during acquisition function maximization. Thus, the prior is effectively truncated to fit the search space without requiring additional consideration. To the best of our knowledge, there is no publicly available implementation of BOWS, so we reimplemented it in Spearmint. For the Spearmint implementation of BOWS, we provide warped versions of each benchmark, obtaining 20 unique warpings per prior quality and benchmark. We truncate the prior by restricting the warped search space to only include the region which maps back to the original search space through the inverted warping function. For all other approaches, we use the original, publicly available implementations. Notably, the available implementation of Hyperopt TPE does not support bounded search spaces under our priors; as a compromise, when asked to evaluate outside the search space we return an empirically obtained maximum on the objective function inside the search space. We use the search spaces, prior locations and descriptions used by (Souza et al., 2021) for the toy and surrogate HPO problems. We now provide additional details about the benchmarks and case study tasks used, their associated search spaces and priors, and the resources used to run these studies. F.2 BENCHMARKS AND CASE STUDIES Branin The Branin function is a well-known synthetic benchmark for optimization problems. The Branin function has two input dimensions and three global minima. Hartmann-6 The Hartmann-6 function is a well-known synthetic benchmark for optimization problems, which has one global optimum and six dimensions. SVM A hyperparameter-optimization benchmark in 2D based on Profet (Klein et al., 2019). This benchmark is generated by a generative meta-model built using a set of SVM classification models trained on 16 OpenML tasks. The benchmark has two input parameters, corresponding to SVM hyperparameters. FCNet A hyperparameter and architecture optimization benchmark in 6D based on Profet. The FC-Net benchmark is generated by a generative meta-model built using a set of feed-forward neural networks trained on the same 16 OpenML tasks as the SVM benchmark. The benchmark has six input parameters corresponding to network hyperparameters. XGBoost A hyperparameter-optimization benchmark in 8D based on Profet. The XGBoost benchmark is generated by a generative meta-model built using a set of XGBoost regression models in 11 UCI datasets. The benchmark has eight input parameters, corresponding to XGBoost hyperparameters. OpenML MLP The OpenML MLP tuning tasks are provided through HPOBenchEggensperger et al. (2021), and train binary classifiers on real-world datasets. The 5D parameter space consists of four continous parameters and one integer parameter. U-Net Medical The U-Net (Ronneberger et al., 2015) is a popular convolutional neural network architecture for image segmentation. We use the implementation and evaluation setting from the popular NVIDIA deep learning examples repository (Przemek et al.) to build a case study for optimizing hyperparameters for U-Net. The NVIDIA repository is aimed towards the segmentation of neuronal processes in electron microscopy images for the 2D EM segmentation challenge dataset (Arganda-Carreras et al., 2015; Cardona et al., 2010). We optimize 6 hyperparameters of the U-Net pipeline. 
ImageNette ImageNette (Howard, 2019) is a subset of 10 classes of ImageNet (Deng et al., 2009) and is primarily used for algorithm development for the popular FastAI library (Howard et al., 2018). The FastAI library contains a convolutional neural network pipeline for ImageNette, which is used by all competitors on the ImageNette leaderboard. We base our case study on the 80-epoch, 128-resolution setting of this leaderboard and optimize 6 of the hyperparameters of the FastAI ImageNette pipeline.

F.3 SEARCH SPACES AND PRIORS

The search spaces for each benchmark are summarized in Table 1 (Branin and Profet), Table 2 (OpenML MLP), and Table 3 (ImageNette and U-Net). For the Profet benchmarks, we report the original ranges and whether or not a log scale was used. However, in practice, Profet's generative model transforms the range of all hyperparameters to a linear [0, 1] range. We use Emukit's public implementation for these benchmarks (Paleyes et al., 2019).

F.4 CASE STUDY DETAILS

Training details for the deep learning case studies Both case studies are based on existing deep learning code, whose hyperparameters we vary according to the HPO. In both case studies, we enabled mixed-precision training, and for ImageNette-128 to work in conjunction with Spearmint, we had to enable the MKL_SERVICE_FORCE_INTEL environment flag. For all further details, we refer to the supplementary material containing our code.

Resources used for the deep learning case studies For U-Net Medical we used one GeForce RTX 2080 Ti GPU, whereas for ImageNette-128 we used two GeForce RTX 2080 Ti GPUs. Additionally, we used 4 and 8 cores, respectively, of an AMD EPYC 7502 32-Core Processor. In Table 4 we list the GPU hours needed for running the deep learning case studies as well as the emitted CO2 equivalents.

Assets for the deep learning case studies In addition to the assets we list in the main paper, the U-Net Medical code base we used employs the 2D EM segmentation challenge dataset (Arganda-Carreras et al., 2015; Cardona et al., 2010), which is available for the purpose of generating or testing non-commercial image segmentation software. We include licenses of all existing code assets we used in the supplementary material containing our code.

G SENSITIVITY TO PRIOR STRENGTH

We investigate the performance of πBO when provided with priors over the optimum of various qualities. To show the effect of decreasing the prior strength, a grid of prior qualities, with varying widths and offsets from the optimum, is provided. Thus, priors range from the strong priors used in the main results to weak, correct priors and sharp, misplaced priors. From Figures 14-18, it is shown that πBO performs strongly across most prior qualities for all benchmarks but Branin, and recoups its early losses on the worst priors in the bottom left corner. πBO demonstrates sensitivity to the width of the prior, as the optimization does not progress as quickly for well-located priors with a larger width. Additionally, πBO's improvement over the Spearmint + Mode baseline is further emphasized, as this baseline often fails to meaningfully improve over the mode in early iterations.
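As a rough illustration of how such a grid of prior qualities could be generated, consider the sketch below; the widths, offsets, and optimum location are placeholders and do not correspond to the exact settings behind Figures 14-18.

```python
import numpy as np
from scipy import stats

def prior_quality_grid(x_opt, widths, offsets):
    """Grid of 1-D Gaussian priors over the optimum with varying widths
    (std as a fraction of the space) and offsets of the mean from the optimum."""
    return {(w, o): stats.norm(loc=x_opt + o, scale=w) for w in widths for o in offsets}

# Example on a [0, 1] search space: from sharp, well-placed priors to wide or
# misplaced ones. The specific values are illustrative only.
priors = prior_quality_grid(x_opt=0.3,
                            widths=[0.01, 0.05, 0.10, 0.25],
                            offsets=[0.0, 0.1, 0.3, 0.6])
```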
1. What is the focus and contribution of the paper regarding Bayesian optimization? 2. What are the strengths of the proposed approach, particularly in terms of incorporating prior information? 3. What are the weaknesses of the paper, especially regarding the theoretical analysis and experimental studies? 4. How does the reviewer assess the significance and novelty of the paper's content? 5. Are there any questions or concerns regarding the scaling of the prior function and its impact on the acquisition function?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a method to incorporate prior information about the optimum into the standard Bayesian optimisation (BO) setting. The prior is specified as a smooth function over the search range, having high values where the experimenter thinks the optimum may exist with high probability and low values otherwise. This prior function is incorporated into the BO workflow through multiplication with the EI acquisition function. The authors show that when the prior function is decayed, this BO achieves the usual sublinear regret rate of standard BO asymptotically. The synthetic experiments are extensive and convincing. The real case studies are limited, but sufficient. Review I am quite aware of research in BO and I think integrating prior information is an important aspect that has not been looked into much. Existing solutions are either restricted or do not admit a convergence analysis. In that respect, I quite like the idea of the paper. It's quite simple and yet lends itself to an analysis of the convergence. The experiments are sufficient in my opinion to expose the behaviour of the algorithm in different scenarios. The theoretical analysis is convincing. Although I should say that the upper bound can become uselessly loose if the prior is not designed properly. The constant C_{\pi, n} can be arbitrarily large if the prior is Gaussian-like and narrow. This may make the anytime upper bound not useful. The authors should use this insight to restrict the shape of the prior function. Another limitation of the theoretical analysis is sticking to the EI acquisition function. An analysis that admits the GP-UCB acquisition function would have completed the work because, unfortunately, EI is not proven to converge in the noisy-function case. Another important question is how the authors propose to scale the prior function \pi so that it can influence the acquisition function. The EI function is notoriously peaky, in the sense that it can have sharp peaks among valleys of mostly small values. In that case, to make the prior count, it needs to be scaled properly. I would like the authors to comment on this aspect.
ICLR
Title $\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization Abstract Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample-efficiency, vanilla BO can not utilize readily available prior beliefs the practitioner has on the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose πBO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. In contrast to previous approaches, πBO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when πBO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that πBO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that πBO improves on the state-of-theart performance for a popular deep learning task, with a 12.5× time-to-accuracy speedup over prominent BO approaches. 1 INTRODUCTION The optimization of expensive black-box functions is a prominent task, arising across a wide range of applications. Bayesian optimization (BO) is a sample-efficient approach to cope with this task, and has been successfully applied to various problem settings, including hyperparameter optimization (HPO) (Snoek et al., 2012), neural architecture search (NAS) (Ru et al., 2021), joint NAS and HPO (Zimmer et al., 2021), algorithm configuration (Hutter et al., 2011), hardware design (Nardi et al., 2019), robotics (Calandra et al., 2014), and the game of Go (Chen et al., 2018). Despite the demonstrated effectiveness of BO for HPO (Bergstra et al., 2011; Turner et al., 2021), its adoption among practitioners remains limited. In a survey covering NeurIPS 2019 and ICLR 2020 (Bouthillier & Varoquaux, 2020), manual search was shown to be the most prevalent tuning method, with BO accounting for less than 7% of all tuning efforts. As the understanding of hyperparameter settings in deep learning (DL) models increase (Smith, 2018), so too does the tuning proficiency of practitioners (Anand et al., 2020). As previously displayed (Smith, 2018; Anand et al., 2020; Souza et al., 2021; Wang et al., 2019), this knowledge manifests in choosing single configurations or regions of hyperparameters that presumably yield good results, demonstrating a belief over the location of the optimum. BO’s deficit to properly incorporate said beliefs is a reason why practitioners prefer manual search to BO (Wang et al., 2019), despite its documented shortcomings (Bergstra & Bengio, 2012). To improve the usefulness of automated HPO approaches for ML practictioners, the ability to incorporate such knowledge is pivotal. Well-established BO frameworks (Snoek et al., 2012; Hutter et al., 2011; The GPyOpt authors, 2016; Kandasamy et al., 2020; Balandat et al., 2020) support user input to a limited extent, such as by biasing the initial design, or by narrowing the search space; however, this type of hard prior can lead to poor performance by missing important regions. BO also supports a prior over functions p(f) via the Gaussian Process kernel. 
However, this option for injecting knowledge is not aligned with the knowledge that experts possess: they often know which ranges of hyperparameter values tend to work best (Perrone et al., 2019; Smith, 2018; Wang et al., 2019), and are able to specify a probability distribution to quantify these priors. For example, many users of the Adam optimizer (Kingma & Ba, 2015) know that its best learning rate is often in the vicinity of 1× 10−3. In practice, DL experiments are typically conducted in a low-budget setting of less than 50 full model trainings (Bouthillier & Varoquaux, 2020). As such, practitioners want to exploit their knowledge efficiently without wasting early model trainings on configurations they expect to likely perform poorly. Unfortunately, this suits standard BO poorly, as BO requires a moderate number of function evaluations to learn about the response surface and make informed decisions that outperform random search. While there is a demand to increase knowledge injection possibilities to further the adoption of BO, the concept of encoding prior beliefs over the location of an optimum is still rather novel: while there are some initial works (Ramachandran et al., 2020; Li et al., 2020; Souza et al., 2021), no approach exists so far that allows the integration of arbitrary priors and offers flexibility in the choice of acquisition function; theory is also lacking. We close this gap by introducing a novel, remarkably simple, approach for injecting arbitrary prior beliefs into BO that is easy to implement, agnostic to the surrogate model used and converges at standard BO rates for any choice of prior. Our contributions After discussing our problem setting, related work, and background (Section 2), we make the following contributions: 1. We introduce πBO, a novel generalization of myopic acquisition functions that accounts for user-specified prior distributions over possible optima, is demonstrably simple-to-implement, and can be easily combined with arbitrary surrogate models (Section 3.1 & 3.2); 2. We formally prove that πBO inherits the theoretical properties of the well-established Expected Improvement acquisition function (Section 3.3); 3. We demonstrate on a broad range of established benchmarks and in DL case studies that πBO can yield 12.5× time-to-accuracy speedup over vanilla BO (Section 4). 2 BACKGROUND AND RELATED WORK 2.1 BLACK-BOX OPTIMIZATION We consider the problem of optimizing a black-box function f across a set of feasible inputs X ⊂ Rd: x∗ ∈ arg min x∈X f(x). (1) We assume that f(x) is expensive to evaluate, and can potentially only be observed through a noisy estimate, y. In this setting, we wish to minimize f in an efficient manner, typically adhering to a budget which sets a cap on the number of points that can be evaluated. Black-Box Optimization with Probabilistic User Beliefs In our work, we consider an augmented version of the optimization problem in Eq. (1), where we have access to user beliefs in the form of a probability distribution on the location of the optimum. Formally, we define the problem of black-box optimization with probabilistic user beliefs as solving Eq. (1), given a user-specified prior probability on the location of the optimum defined as π(x) = P ( f(x) = min x′∈X f(x′) ) , (2) where regions that the user expects to likely to contain an optimum will have a high value. We note that, without loss of generality, we require π to be strictly positive on all of X , i.e., any point in the search space might be an optimum. 
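As a concrete example of such a belief, the sketch below encodes a prior for a single log-scaled hyperparameter (an Adam learning rate believed to lie near 1e-3) as a truncated Gaussian in log-space used as the weighting π. The range, width, and helper name are illustrative assumptions, not a prescribed choice.

```python
import numpy as np
from scipy import stats

# Belief: the best Adam learning rate is probably near 1e-3.
# Encode pi as a Gaussian over log10(learning rate), truncated to the search
# range [1e-5, 1e-1]; every point in the range keeps strictly positive density.
log_lo, log_hi = np.log10(1e-5), np.log10(1e-1)
mu, sigma = np.log10(1e-3), 0.5          # width of half an order of magnitude (illustrative)
a, b = (log_lo - mu) / sigma, (log_hi - mu) / sigma
pi = stats.truncnorm(a, b, loc=mu, scale=sigma)

def prior_weight(lr):
    """pi evaluated at a learning rate via its log10 transform."""
    return pi.pdf(np.log10(lr))

print(prior_weight(1e-3), prior_weight(1e-5))  # high near the belief, low at the range edge
```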
Since the user belief π(x) can be inaccurate or even misleading, optimizing Eq. (1) given (2) is a challenging problem.

2.2 BAYESIAN OPTIMIZATION

We outline Bayesian optimization (Mockus et al., 1978; Brochu et al., 2010; Shahriari et al., 2016b).

Model BO aims to globally minimize f by an initial experimental design D_0 = {(x_i, y_i)}_{i=1}^M and thereafter sequentially deciding on new points x_n to form the data D_n = D_{n−1} ∪ {(x_n, y_n)} for the n-th iteration with n ∈ {1, . . . , N}. After each new observation, BO constructs a probabilistic surrogate model of f and uses that surrogate to evaluate an acquisition function α(x, D_n). The combination of surrogate model and acquisition function encodes the policy for selecting the next point x_{n+1}. When constructing the surrogate, the most common choice is Gaussian processes (Rasmussen & Williams, 2006), which model f as p(f | D_n) = GP(m, k), with prior mean m (which is typically 0) and positive semi-definite covariance kernel k. The posterior mean m_n and variance s_n² are

m_n(x) = k_n(x)^⊤ (K_n + σ_n² I)^{−1} y,   s_n²(x) = k(x, x) − k_n(x)^⊤ (K_n + σ_n² I)^{−1} k_n(x), (3)

where (K_n)_{ij} = k(x_i, x_j), k_n(x) = [k(x, x_1), . . . , k(x, x_n)]^⊤ and σ_n² is the estimate of the observation noise variance σ². Alternative surrogate models include random forests (Hutter et al., 2011) and Bayesian neural networks (Springenberg et al., 2016).

Acquisition Functions To obtain new candidates to evaluate, BO employs a criterion, called an acquisition function, that encapsulates an explore-exploit trade-off. By maximizing this criterion at each iteration, one or more candidate points are obtained and added to the observed data. Several acquisition functions are used in BO; the most common of these is Expected Improvement (EI) (Jones et al., 1998). For a noiseless function, EI selects the next point x_{n+1}, where f*_n is the minimal objective function value observed by iteration n, as

x_{n+1} ∈ arg max_{x∈X} E[(f*_n − f(x))+] = arg max_{x∈X} Z s_n(x)Φ(Z) + s_n(x)φ(Z), (4)

where Z = (f*_n − m_n(x))/s_n(x). Thus, EI provides a myopic strategy for determining promising points; it also comes with convergence guarantees (Bull, 2011). Similar myopic acquisition functions are Upper Confidence Bound (UCB) (Srinivas et al., 2012), Probability of Improvement (PI) (Jones, 2001; Kushner, 1964) and Thompson Sampling (TS) (Thompson, 1933). A different class of acquisition functions is based on non-myopic criteria, such as Entropy Search (Hennig & Schuler, 2012), Predictive Entropy Search (Hernández-Lobato et al., 2014) and Max-value Entropy Search (Wang & Jegelka, 2017), which select points to minimize the uncertainty about the optimum, and the Knowledge Gradient (Frazier et al., 2008), which aims to minimize the posterior mean of the surrogate at the subsequent iteration. Our work applies to all acquisition functions in the first class, and we leave its extension to those in the second class for future work.

2.3 RELATED WORK

There are two main categories of approaches that exploit prior knowledge in BO: approaches that use records of previous experiments, and approaches that incorporate assumptions on the black-box function provided either directly or indirectly by the user. As πBO exploits prior knowledge from users, we briefly discuss approaches which utilize previous experiments, and then comprehensively discuss the literature on exploiting expert knowledge.

Learning from Previous Experiments Transfer learning for BO aims to automatically extract and use knowledge from prior executions of BO.
These executions can come, for example, from learning and optimizing the hyperparameters of a machine learning algorithm on different datasets (van Rijn & Hutter, 2018; Swersky et al., 2013; Wistuba et al., 2015; Perrone et al., 2019; Feurer et al., 2015; 2018), or from optimizing the hyperparameters at different development stages (Stoll et al., 2020). For a comprehensive overview of meta learning for hyperparameter optimization, please see the survey from Vanschoren (2018). In contrast to these transfer learning approaches, πBO and the related work discussed below does not hinge on the existence of previous experiments, and can therefore be applied more generally. Incorporating Expert Priors over Function Structure BO can leverage structural priors on how the objective function is expected to behave. Traditionally, this is done via the surrogate model’s prior over functions, e.g., the kernel of the GP. However, there are lines of work that explore additional structural priors for BO to leverage. For instance, both SMAC (Hutter et al., 2011) and iRace (LópezIbáñez et al., 2016) support structural priors in the form of log-transformations, Li et al. (2018) propose to use knowledge about the monotonicity of the objective function as a prior for BO, and Snoek et al. (2014) model non-stationary covariance between inputs by warping said inputs. Oh et al. (2018) and Siivola et al. (2018) both propose structural priors tailored to high-dimensional problems, addressing the issue of over-exploring the boundary described by Swersky (2017). Oh et al. (2018) propose a cylindrical kernel that expands the center of the search space and shrinks the edges, while Siivola et al. (2018) propose adding derivative signs to the edges of the search space to steer BO towards the center. Lastly, Shahriari et al. (2016a) propose a BO algorithm for unbounded search spaces which uses a regularizer to penalize points based on their distance to the center of the user-defined search space. All of these approaches incorporate prior information on specific properties of the function or search space, and are thus not always applicable. Moreover, they do not generally direct the search to desired regions of the search space, offering the user little control over the selection of points to evaluate. Incorporating Expert Priors over Function Optimum Few previous works have proposed to inject explicit prior distributions over the location of an optimum into BO. In these cases, users explicitly define a prior that encodes their beliefs on where the optimum is more likely to be located. Bergstra et al. (2011) suggest an approach that supports prior beliefs from a fixed set of distributions. However, this approach cannot be combined with standard acquisition functions. BOPrO (Souza et al., 2021) employs a similar structure that combines the user-provided prior distribution with a data-driven model into a pseudo-posterior. From the pseudo-posterior, configurations are selected using the EI acquisition function, using the formulation in Bergstra et al. (2011). While BOPrO is able to recover from misleading priors, its design restricts it to only use EI. Moreover, it does not provide the convergence guarantees of πBO. Li et al. (2020) propose to infer a posterior conditioned on both the observed data and the user prior through repeated Thompson sampling and maximization under the prior. This method displays robustness against misleading priors but lacks in empirical performance. 
Additionally, it is restricted to only one specific acquisition function. Ramachandran et al. (2020) use the probability integral transform to warp the search space, stretching high-probability regions and shrinking others. While the approach is model- and acquisition function agnostic, it requires invertible priors, and does not empirically display the ability to recover from misleading priors. In Section 4, we demonstrate that πBO compares favorably for priors over the function optimum, and shows improved empirical performance. Additionally, we do a complete comparison of all approaches in Appendix C. In summary, πBO sets itself apart from the methods above by being simpler (and thus easier to implement in different frameworks), flexible with regard to different acquisition functions and different surrogate models, the availability of theoretical guarantees, and, as we demonstrate in Section 4, better empirical results. 3 METHODOLOGY We now present πBO, which allows users to specify their belief about the location of the optimum through any probability distribution. A conceptually simple approach, πBO can be easily implemented in existing BO frameworks and can be combined directly with the myopic acquisition functions listed above. πBO augments an acquisition function to emphasize promising regions under the prior, ensuring such regions are to be explored frequently. As optimization progresses, the πBO strategy increasingly resembles that of vanilla BO, retaining its standard convergence rates (see Section 3.3). πBO is publicly available as part of the SMAC (https://github.com/automl/SMAC3) and HyperMapper (https://github.com/luinardi/hypermapper) HPO frameworks. 3.1 PRIOR-WEIGHTED ACQUISITION FUNCTION In πBO, we consider π(x) in Eq. (2) to be a weighting scheme on points in X . The heuristic provided by an acquisition function α(x,Dn), such as EI in Eq. (2.2), can then be combined with said weighting scheme to form a prior-weighted version of the acquisition function. The resulting strategy then becomes: xn ∈ arg max x∈X α(x,Dn)π(x). (5) This emphasizes good points under π(x) throughout the optimization. While this property is suitable for well-located priors π, it risks incurring a substantial slowdown for poorly-chosen priors; we will now show how to counter this by decaying the prior over time. 3.2 DECAYING PRIOR-WEIGHTED ACQUISITION FUNCTION As the optimization progresses, we should increasingly trust the surrogate model over the prior; the model improves with data while the user prior remains fixed. This cannot be achieved with the formulation in Eq. (5), as poorly-chosen priors would permanently slow down the optimization. Rather, to accomplish this desired behaviour, the influence of the prior needs to decay over time. Building on the approaches of Lee et al. (2020) and Souza et al. (2021), we accomplish this by raising the prior to a power γn ∈ R+, which decays towards zero with growing n. Thus, the resulting prior πn(x) = π(x)γn reflects a belief on the location of an optimum that gets weaker with time, converging towards a uniform distribution. We set γn = β/n, where β ∈ R+ is a hyperparameter set by the user, reflecting their confidence in π(x). We provide a sensitivity study on β in Appendix A. For a given acquisition function α(x,Dn) and user-specified prior π(x), we define the decaying prior-weighted acquisition function at iteration n as απ,n(x,Dn) ∆ = α(x,Dn)πn(x) ∆ = α(x,Dn)π(x)β/n (6) and its accompanying strategy as the maximizer of απ,n. 
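A schematic, framework-agnostic sketch of Eq. (6) for the EI case is given below. The EI expression follows Eq. (4); the surrogate predictions, the candidate set, and the prior density are assumed to be supplied by the surrounding BO loop, and names such as gp.predict in the commented usage are placeholders rather than a specific library's API.

```python
import numpy as np
from scipy import stats

def expected_improvement(mean, std, f_best):
    """Vanilla EI for minimization (Eq. 4), with a guard against zero variance."""
    std = np.maximum(std, 1e-12)
    z = (f_best - mean) / std
    return (f_best - mean) * stats.norm.cdf(z) + std * stats.norm.pdf(z)

def pi_bo_acquisition(mean, std, f_best, prior_density, n, beta):
    """Decaying prior-weighted EI (Eq. 6): EI(x) * pi(x)**(beta / n)."""
    return expected_improvement(mean, std, f_best) * prior_density ** (beta / n)

# Usage sketch: select the next point from a candidate set X_cand.
#   mean, std = gp.predict(X_cand, return_std=True)
#   acq = pi_bo_acquisition(mean, std, y.min(), pi(X_cand), n=iteration, beta=10.0)
#   x_next = X_cand[np.argmax(acq)]
```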
With the acquisition function in Eq. (6), the prior will assume large importance initially, promoting the selection of points close to the prior mode. With time, the exponent on the prior will tend to zero, making the prior tend to uniform. Thus, with increasing n, the point selection of απ,n becomes increasingly similar to that of α. Algorithm 1 displays the simplicity of the new strategy, highlighting the required one-line change (Line 6) in the main BO loop. In Line 3, the mode of the prior is used as a first initial sample if available. Otherwise, only sampling is used for initialization. Algorithm 1 πBO Algorithm 1: Input: Input space X , prior distribution over optimum π(x), prior confidence parameter β, size M of the initial design, max number of optimization iterations N . 2: Output: Optimized design x∗. 3: {xi}Mi=1 ∼ π(x), {yi ← f(xi) + i}Mi=1, i ∼ N(0, σ2) 4: D0 ← {(xi, yi)}Mi=1 5: for {n = 1, 2, . . . , N} do 6: xnew ← arg maxx∈X α(x,Dn−1)π(x)β/n 7: ynew ← f(xnew) + i 8: Dn ← Dn−1 ∪ {(xnew, ynew)} 9: end for 10: return x∗ ← arg min(xi,yi)∈DN yi To illustrate the behaviour of πBO, we consider a toy problem with Gaussian priors on three different locations of the 1D space (center, left and right) as displayed in Figure 1. We define a 1D-Log-Branin toy problem by setting the second dimension of the 2D Branin function to the global optimum x2 = 2.275 and optimizing for the first dimension. Initially (iteration 4 in the top row), πBO amplifies the acquisition function α in high-probability regions, putting a lot of trust in the prior. As the prior decays (iteration 6 and 8 in the middle and bottom rows, respectively), the influence of the prior on the point selection decreases. By later iterations, πBO has searched substantially around the prior mode, and moves gradually towards other parts of the search space. This is of particular importance for the scenarios in the right column, where πBO recovers from a misleading prior. In Appendix B, we show that πBO is applicable to different surrogate models and acquisition functions. 3.3 THEORETICAL ANALYSIS We now study the πBO method from a theoretical standpoint when paired with the EI acquisition function. For the full proof, we refer the reader to Appendix E. To provide convergence rates, we rely on the set of assumptions introduced by Bull (2011). These assumptions are satisfied for popular kernels like the Matérn (1960) class and the Gaussian kernel, which is obtained in the limit ν →∞, where the rate ν controls the smoothness of functions from the GP prior. Our theoretical results apply when both length scales ` and the global scale of variation σ are fixed; these results can then be extended to the case where the kernel hyperparameters are learned using Maximum Likelihood Estimation (MLE) following the same procedure as in Bull (2011) (Theorem 5). We define the loss over the ball BR for a function f of norm ||f ||H`(X ) ≤ R in the reproducing kernel Hilbert space (RKHS)H`(X ) given a symmetric positive-definite kernel K` as Ln(u,Dn,H`(X ), R) ∆ = sup ||f ||H`(X)≤R Euf [f(x∗n)−min f ], (7) where n is the optimization iteration and u a strategy. We focus on the strategy that maximizes EIπ , the prior-weighted EI, and show that the loss in Equation (7) can, at any iteration n, be bounded by the vanilla EI loss function. We refer to EIπ,n and EIn when we want to emphasize the iteration n for the acquisition functions EIπ and EI, respectively. Theorem 1. 
Given Dn, K`, π, β, σ, `, R and the compact set X ⊂ Rd as defined above, the loss Ln incurred at iteration n by EIπ,n can be bounded from above as Ln(EIπ,n,Dn,H`(X ), R) ≤ Cπ,nLn(EIn,Dn,H`(X ), R), Cπ,n = ( maxx∈X π(x) minx∈X π(x) )β/n . (8) Using Theorem 1, we obtain the convergence rate of EIπ . This trivially follows when considering the fraction of the losses in the limit and inserting the original convergence rate on EI as in Bull (2011): Corollary 1. The loss of a decaying prior-weighted Expected Improvement strategy, EIπ, is asymptotically equal to the loss of an Expected Improvement strategy, EI: Ln(EIπ,n,Dn,H`(X ), R) ∼ Ln(EIn,Dn,H`(X ), R), (9) so we obtain a convergence rate for EIπ of Ln(EIπ,n,Dn,H`(X ), R) = O(n−(ν∧1)/d(log n)γ). Thus, we determine that the weighting introduced by EIπ does not negatively impact the worst-case convergence rate. The short-term performance is controlled by the user in their choice of π(x) and β. This result is coherent with intuition, as a weaker prior or quicker decay will yield a short-term performance closer to that of EI. In contrast, a stronger prior or slower decay does not guarantee the same short-term performance, but can produce better empirical results, as shown in Section 4. 4 RESULTS We empirically demonstrate the efficiency of πBO in three different settings. As πBO is a general method to augment acquisition functions, it can be implemented in different parent BO packages, and the implementation in any given package inherits the pros and cons of that package. To minimize confounding factors concerning this choice of parent package, we keep comparisons within the methods in one package where possible and provide results in the other packages in Appendix C. In Sec. 4.2, using Spearmint as a parent package, we evaluate πBO against three intuitive baselines to assess its performance and robustness on priors with different qualities, ranging from very accurate to purposefully detrimental. To this end, we use toy functions and cheap surrogates, where priors of known quality can be obtained. Next, in Sec. 4.3, we compare πBO against two competitive approaches (BOPrO and BOWS) that integrate priors over the optimum similarly to πBO, using HyperMapper (Nardi et al., 2019) as a parent framework, in which the most competitive baseline BOPrO is implemented. For these experiments we adopt a Multilayer Perceptron (MLP) benchmark on various datasets, using the interface provided by HPOBench (Eggensperger et al., 2021), with priors constructed around the defaults provided by the library. Lastly, in Sec. 4.4, we apply πBO and other approaches to two deep learning tasks, also using priors derived from publicly available defaults. Further, we demonstrate the flexibility of πBO in Appendix B, where we evaluate πBO in SMAC (Hutter et al., 2011; Lindauer et al., 2021) with random forests, as another framework with another surrogate model, and adapt it to use the UCB, TS and PI acquisition functions instead of EI. 4.1 EXPERIMENTAL SETUP Priors For our surrogate and toy function tasks, we follow the prior construction methodology in BOPrO (Souza et al., 2021) and create three main types of prior qualities, all Gaussian: strong, weak and wrong. The strong and weak priors are located to have a high and moderate density on the optimum, respectively. The wrong prior is a narrow distribution located in the worst region of the search space. 
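For illustration, the following sketch mirrors this construction on a [0, 1]^d search space, loosely following the recipe detailed in Appendix D (standard deviations of 1% and 10% of the space and a quality-specific offset of the mean); the optimum, the worst point, and the dimensionality used here are placeholders.

```python
import numpy as np
from scipy import stats

def make_prior(center, sigma_frac, rng, d, jitter=True):
    """Isotropic Gaussian prior on a [0, 1]^d space, with the standard deviation
    given as a fraction of the space; optionally offset the mean by
    quality-specific noise, as in Appendix D."""
    mean = np.asarray(center, dtype=float)
    if jitter:
        mean = mean + rng.normal(0.0, sigma_frac, size=d)
    mean = np.clip(mean, 0.0, 1.0)  # Appendix D instead re-draws priors whose mean leaves the space
    return stats.multivariate_normal(mean=mean, cov=np.diag(np.full(d, sigma_frac ** 2)))

rng = np.random.default_rng(0)
d = 2
x_opt = np.array([0.3, 0.7])      # (approximate) optimum, assumed known in advance
x_worst = np.array([0.95, 0.05])  # empirical maximum of the objective (placeholder)
strong = make_prior(x_opt, 0.01, rng, d)                 # narrow, near the optimum
weak = make_prior(x_opt, 0.10, rng, d)                   # wider, moderately informative
wrong = make_prior(x_worst, 0.01, rng, d, jitter=False)  # narrow, but misplaced
```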
For the OpenML MLP tuning benchmark, we utilize the defaults and search spaces provided in HPOBench (Eggensperger et al., 2021), and construct Gaussian priors for each hyperparameter with their mean on the default value, and a standard deviation of 25% of the hyperparameter’s domain. For the DL case studies, we utilize defaults from each task’s repository and, for numerical hyperparameters, once again set the standard deviation to 25% of the hyperparameter’s domain. For categorical hyperparameters, we place a higher probability on the default. As such, the quality of the prior is ultimately unknown, but serves as a proxy for what a practitioner may choose and has shown to be a reasonable choice (Anastacio & Hoos, 2020). For all experiments, we run πBO with β = N/10, where N is the total number of iterations, in order to make the prior influence approximately equal in all experiments, regardless of the number of allowed BO iterations. We investigate the sensitivity to β in Appendix A, and the sensitivity to prior quality in Appendix G. Baselines We empirically evaluate πBO against the most competitive approaches for priors over the optimum described in Section 2.3: BOPrO (Souza et al., 2021) and BO in Warped Space (BOWS) (Ramachandran et al., 2020). To contextualize the performance of πBO, we provide additional, simpler baselines: random sampling, sampling from the prior and BO with prior-based initial design. The latter is initialized with the mode of the prior in addition to its regular initial design. In our main results, we choose Spearmint (with EI) (Snoek et al., 2012) for this mode-initialized baseline, simply referring to it as Spearmint. See Appendix F for complete details on the experiments. 4.2 ROBUSTNESS OF πBO First, we study the robustness of πBO. To this end, we show that πBO benefits from informative priors and can recover from wrong priors, being consistent with our theoretical results in Section 3.3. To this end, we consider a well-known black-box optimization function, Branin (2D), as well as two surrogate HPO tasks from the Profet suite (Klein et al., 2019): FC-Net (6D) and XGBoost (8D). For these tasks, we exemplarily show results for πBO implemented in the Spearmint framework. As Figure 2 shows, πBO is able to quickly improve over sampling from the prior. Moreover, it improves substantially over Spearmint (with mode initialization) for all informative priors, staying up to an order of magnitude ahead throughout the optimization for both strong and weak priors. For wrong priors, πBO displays desired robustness by recovering to approximately equal regret as Spearmint. In contrast, Spearmint frequently fails to substantially improve from its initial design on the strong and weak prior, which demonstrates the importance of considering the prior throughout the optimization procedure. This effect is even more pronounced on the higher-dimensional tasks FCNet and XGBoost, where BO typically spends many iterations at the boundary (Swersky, 2017). Here, πBO rapidly improves multiple orders of magnitude over the initial design, displaying its ability to efficiently exploit the information provided by the prior. 4.3 COMPARISON OF πBO AGAINST OTHER PRIOR-GUIDED APPROACHES Next, we study the performance of πBO against other state-of-the-art prior-guided approaches. 
To this end, we consider optimizing 5 hyperparameters of an MLP for classification (Eggensperger et al., 2021) on 6 different OpenML datasets (Vanschoren et al., 2014) and compare against BOPrO (Souza et al., 2021) and BOWS (Ramachandran et al., 2020). For minimizing confounding factors, we implement πBO and BOWS in HyperMapper (Nardi et al., 2019), the same framework that BOPrO runs on. Moreover, we let all approaches share πBO’s initialization procedure. We consider a budget of 50 iterations as it is common with ML practitioners (Bouthillier & Varoquaux, 2020). In Figure 3, we see that πBO offers the best performance on four out of six tasks, and displays the most consistent performance across tasks. In contrast to them BOWS and BOPrO, πBO also comes with theoretical guarantees and is flexible in the choice of framework and acquisition function. 4.4 CASE STUDIES ON DEEP LEARNING PIPELINES Last, we study the impact of πBO on deep learning applications, which are often fairly expensive, making efficiency even more important than in HPO for traditional machine learning. To this end, we consider two deep learning case studies: segmentation of neuronal processes in electron microscopy images with a U-Net(6D) (Ronneberger et al., 2015), with code provided from the NVIDIA deep learning examples repository (Przemek et al.), and image classification on ImageNette-128 (6D) (Howard, 2019), a light-weight adaptation of ImageNet (Deng et al., 2009), with code from the repository of the popular FastAI library (Howard et al., 2018). We mimic the setup from Section 4.3 by using the HyperMapper framework and identical initialization procedures across approaches. Gaussian priors are set on publicly available default values, which are results of previous tuning efforts of the original authors. We again optimize for a practical budget of 50 iterations (Bouthillier & Varoquaux, 2020). As test splits for both tasks were not available to us, we report validation scores. As shown in Figure 4, πBO achieves a 2.5× time-to-accuracy speedup over Vanilla BO. For ImageNette, the performance of πBO at iteration 4 already surpasses the performance of Vanilla BO at Iteration 50, demonstrating a 12.5× time-to-accuracy speedup. Ultimately, πBO’s final performance establishes a new state-of-the-art validation performance on ImageNette with the provided pipeline, with a final accuracy of 94.14% (vs. the previous state of the art with 93.55%1). 5 CONCLUSION AND FUTURE WORK We presented πBO, a conceptually very simple Bayesian optimization approach for leveraging user beliefs about the location of an optimum, which relies on a generalization of myopic acquisition functions. πBO modifies the selection of design points through a decaying weighting scheme, promoting high-probability regions under the prior. Contrary to previous approaches, πBO imposes only minor restrictions on the type of priors, surrogates or frameworks that can be used. πBO provably converges at regular rates, displays state-of-the art performance across tasks, and effectively recovers from poorly specified priors. Moreover, we have demonstrated that πBO can yield substantial performance gains for practical low-budget settings, improving on the state-of-the-art for a real-world CNN tuning tasks even with trivial choices for the prior. For practitioners who have historically relied on manual or grid search for HPO, we hope that πBO will serve as an intuitive and effective tool for bridging the gap between traditional tuning methods and BO. 
πBO sets the stage for several follow-up studies. Amongst others, we will examine the extension of πBO to non-myopic acquisition functions, such as entropy-based methods. Non-myopic acquisition functions do not fit well in the current πBO framework, as they do not necessarily benefit from evaluating inputs expected to perform well. We will also combine πBO with multi-fidelity optimization methods to yield even higher speedups, and with multi-objective optimization to jointly optimize performance and secondary objective functions, such as interpretability or fairness of models. 1https://github.com/fastai/imagenette#imagenette-leaderboard, 80 Epochs, 128 Resolution 6 ETHICS STATEMENT Our work proposes an acquisition function generalization which incorporates prior beliefs about the location of the optimum into optimization. The approach is foundational and thus will not bring direct societal or ethical consequences. However, πBO will likely be used in the development of applications for a wide range of areas and thus indirectly contribute to their impacts on society. In particular, we envision that πBO will impact a multitude of fields by allowing ML experts to inject their knowledge about the location of the optimum into Bayesian Optimization. We also note that we intend for πBO to be a tool that allows users to assist Bayesian Optimization by providing reasonable prior knowledge and beliefs. This process induces user bias into the optimization, as πBO will inevitably start by optimizing around this prior. As some users may only be interested in optimizing in the direct neighborhood of their prior, πBO could allow them to do so if provided with a high β value in relation to the number of iterations. Thus, if improperly specified, πBO could serve to reinforce user’s beliefs by providing improved solutions only for the user’s region of interest. However, if used properly, πBO will reduce the computational resources required to find strong hyperparameter settings, contributing to the sustainability of machine learning. 7 REPRODUCIBILITY In order to make the experiments run in πBO as reproducible as possible, we have included links to repositories of our implementations in both Spearmint and HyperMapper, with instructions on how to run our experiments. Moreover, we have included in said repositories all of the exact priors that we have used for our runs, which run out of the box. The priors we used were, in our opinion, well motivated as to avoid subjectivity, which we hope serves as a good frame of reference for similar works in the future. Specifically, Appendix 4.4 describes how we ran our DL experiments, Appendix F.1 goes into the implementation in further detail, and Appendix D displays the exact priors for all our experiments and prior strengths. Our Spearmint implementation of both πBO and BOWS is available at https://github.com/piboauthors/PiBO-Spearmint, and our HyperMapper implementation is available at https://github.com/piboauthors/ PiBO-Hypermapper. For our results on the convergence of πBO, we have provided a complete proof in Appendix E. 8 ACKNOWLEDGEMENTS Luigi Nardi was supported in part by affiliate members and other supporters of the Stanford DAWN project — Ant Financial, Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware. Carl Hvarfner and Luigi Nardi were partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Artur Souza was supported by CAPES, CNPq, and FAPEMIG. 
Frank Hutter acknowledges support by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme through grant no. 716721, through TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828 and by the state of BadenWürttemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 39/963-1 FUGG. Marius Lindauer acknowledges support by the European Research Council (ERC) under the Europe Horizon programme. The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC partially funded by the Swedish Research Council through grant agreement no. 2018-05973. A BETA ABLATION STUDY We consider the effect of the β hyperparameter of πBO introduced in Section 3.2, controlling the speed of the prior decay. To show the effect of this hyperparameter, we display the performance of πBO for the toy and surrogate-based benchmarks across all prior qualities. We emphasize the trade-off between high-end performance on good priors and robustness to bad priors. In general, a higher value of β yields better performance for good priors, but makes πBO slower to recover from bad priors. This behaviour follows intuition and the results provided in Section 3.3. In Figure 5, we display how πBO performs for different choices of β, and once again provide sampling from the prior and Spearmint as baselines. Following the prior decay parameter baseline by (Souza et al., 2021), we show that the choice of β = 10 onsistently gives one of the best performances for strong priors, while retaining good overall robustness. Nearly all choices of β give a final performance better than that of Spearmint for good priors. Additionally, there is a clear relationship between final performance and β on all good priors. This is best visualized in the weak XGBoost experiment, where the final performances are distinctly sorted by increasing β. Similar patterns are not as apparent in the final performance on wrong priors. This behaviour highlights the benefits of slowly decaying the prior. Overall, πBO is competitive for a wide range of β, but suffers slightly worse final performance on good priors for low values of β. B πBO VERSATILITY We show the versatility of πBO by implementing it in numerous variants of SMAC Hutter et al. (2011), a well-established HPO framework which supports both GP and RF surrogates, and a majority of the myopic acquisition functions mentioned in Section 2. We showcase the performance of πBO-EI, πBO-PI, πBO-UCB and πBO-TS on the general formulation of πBO with a GP surrogate, as well as πBO-EI with an RF surrogate, which requires a minor adaptation. B.1 GENERAL FORMULATION OF πBO To allow for the universality of πBO across several acquisition function, we must consider the various magnitudes of acquisition functions. As UCB and TS typically output values in the same order of magnitude and sign as the objective function, we do not want the behaviour of πBO to be affected by such variations. The solution to the problem referenced above is to add a simple affine transformation to the observations, {yi}ni=1, by subtracting by the incumbent, y∗n. As such, we consider at each time step not the original dataset, Dn = {(xi, yi)}ni=1, but the augmented dataset D̂n = {(xi, yi − y∗n)}ni=1. 
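A minimal sketch of this augmentation, assuming a minimization problem (names and example values are ours):

```python
import numpy as np

def augment_observations(y):
    """Shift observations by the incumbent so the best observed value becomes 0.
    EI and PI are unaffected by this shift, while UCB and TS become scale- and
    sign-consistent before being multiplied with the (non-negative) prior."""
    y = np.asarray(y, dtype=float)
    return y - y.min()

y = np.array([3.2, 1.7, 2.4])     # raw objective values (minimization)
y_hat = augment_observations(y)   # -> array([1.5, 0. , 0.7]); incumbent mapped to 0
```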
With this formulation, we get the desired scale- and sign-invariance in the UCB and TS acquisition functions, without changing the original strategy of any of the acquisition function. Notably, this change leaves prior-weighted EI and PI unaffected. B.2 RANDOM FOREST SURROGATE We now demonstrate πBO with a RF surrogate model. In the SMAC implementation of the RF surrogate, the model forms piece-wise constant mean and covariance functions. Naturally, this leads to the EI, PI or UCB acquisition function surface being piece-wise constant as well. Consequently, an acquisition function with a RF surrogate will typically have a region of global optima. The choice of the next design point is then selected uniformly at random among the candidate optima. We wish to retain this randomness when applying πBO. As such, we require the prior to be piece-wise constant, too. To do so, we employ a binning approach, that linearly rounds prior values after applying the decay term. The granularity of the binning decreases at the same rate as the prior, allowing the piece-wise constant regions of the prior grow in size as optimization progresses. In Figure 9, we demonstrate the importance of the piece-wise constant acquisition function by showing the point selection when applying a πBO with a continuous prior to an RF surrogate (left) and when applying the binning approach (right). Notably, the smooth prior on the left repeatedly proposes design points very close to previous points, as the prior forces the selection of points near the boundary of a promising region. Thus, the surrogate model rarely improves, and the optimization gets stuck at said boundary for multiple iterations at a time. This is best visualized at iteration 5 and 10, where similar points have been selected for all iterations in the time span. With the binned prior on the right, the selection of design points occurs randomly within a region, avoiding the static point selection and updating of non-modified approach. In Figure 8, we report the performance of πBO with a RF surrogate and the binning approach. This approach is competitive, as it provides substantial improvement over SMAC, improves over sampling from the prior, and quickly recovers from misleading priors. Notably, the binning is not required for discrete parameters, as the piece-wise constant property holds by default. Thus, this adaptation is only necessary for continuous parameters. C OTHER PRIOR-BASED APPROACHES We now demonstrate the performance of πBO for five different functions and HPO Surrogates: Branin, Hartmann-6, as well as three tasks from the Profet suite - SVM, FCNet and XGBoost. We compare all frameworks for priors over the optimum - namely BOPrO Souza et al. (2021), BOWS Ramachandran et al. (2020), TPE Bergstra et al. (2011), PS-G Li et al. (2020). The performance of πBO is shown on two different frameworks - Spearmint and Hypermapper - to allow for fair comparison and display cross-framework consistency. As BOWS is implemented in Spearmint and BOPrO in Hypermapper, they appear in the plots retaining to their framework. We display each approach with vanilla Spearmint/Hypermapper, with normal initialization, as an additional baseline. Moreover, we display the performance of πBO implemented in Spearmint, as well as Mode + Spearmint, on the MLP tuning tasks. D PRIOR CONSTRUCTION We now present the method by which we construct our priors. For the synthetic benchmarks, we mimic (Souza et al., 2021) by offsetting a Gaussian distribution from the optima. 
For our case studies, we choose a Gaussian prior with zero correlation between dimensions. This was required in order to have a simple, streamlined approach that was compatible with all frameworks. We constructed the priors once before conducting the experiments, and kept them fixed throughout.

Synthetic and Surrogate-based HPO Benchmarks For these benchmarks, the approximate optima of all included functions could be obtained in advance, either analytically or empirically through extensive sampling. Thus, the correctness of the prior is ultimately known in advance. For a function of dimensionality d with optimum at x*, the strong and weak prior qualities were constructed by using a quality-specific noise term ε = {ε_i}_{i=1}^d and a quality-specific standard deviation given as a fraction of the search space. For the strong prior π_s(x), we use a small standard deviation σ_s = 1% and construct the prior as

π_s(x) ∼ N(x* + ε, σ_s),   ε_i ∼ N(0, σ_s). (10)

We construct the weak priors analogously by using a larger standard deviation σ_w = 10%. For our 20 runs of the strong and weak prior, this procedure yielded 20 unique priors per quality type, with varying offsets from the true optimum. Additionally, the density on the optimum is substantially larger for the strong prior than for the weak prior. No priors with a mean outside the search space were allowed; such priors were simply replaced. For Branin, we only considered one of the three Branin optima for this procedure, since not all included frameworks support multi-modal distributions. For the wrong prior, we construct it similarly to the strong prior, but around the empirical maximum, x̄*, of the objective function in the search space. Since this point was far away from the optimum for all benchmarks, we did not add additional noise. So, the wrong prior π_m is constructed as

π_m(x) ∼ N(x̄*, σ_s), (11)

which means that the wrong prior is identical across runs for a given benchmark.

E PROOFS

Here, we provide the complete proofs for the Theorem and Corollary introduced in Section 3.3. In addition, we provide insight into the interplay between β, the prior π, and the value of the derived bound C_{π,n}.

Theorem 1. Given D_n, K_ℓ, π, β, σ, ℓ, R and the compact set X ⊂ R^d as defined above, the loss L_n incurred at iteration n by EI_{π,n} can be bounded from above as

L_n(EI_{π,n}, D_n, H_ℓ(X), R) ≤ C_{π,n} L_n(EI_n, D_n, H_ℓ(X), R),   C_{π,n} = (max_{x∈X} π(x) / min_{x∈X} π(x))^{β/n}. (12)

Proof. To bound the performance of EI_π by that of EI, we primarily need to consider Lemma 7 and Lemma 8 by Bull (2011). Lemma 7 states that for any sequence of points {x_i}_{i=1}^n, dimensionality d, kernel length scales ℓ, and p ∈ N, the posterior standard deviation s_n at x_{n+1} will, for a large value C, satisfy the following inequality at most p times:

s_n(x_{n+1}; ℓ) ≥ C p^{−(ν∧1)/d}(log p)^γ,   with γ = α if ν ≤ 1 and γ = 0 if ν > 1. (13)

Thus, we can bound the posterior variance by assuming a point in time n_p where Eq. 13 has held p times. We now consider Lemma 8 where, through a number of inequalities, EI is bounded by the actual improvement I_n:

max(I_n − Rs, (τ(−R/σ)/τ(R/σ)) I_n) ≤ EI_n(x) ≤ I_n + (R + σ)s, (14)

where I_n = (f(x*_n) − f(x))+, τ(z) = zΦ(z) + φ(z) and s = s_n(x_n; ℓ). Since πBO re-weights EI_n by π_n, these bounds need adjustment to hold for EI_{π,n}. For the upper bound provided in Lemma 8, we make use of max_{x∈X} π_n(x) to bound EI_{π,n}(x) for any point x ∈ X:

EI_{π,n}(x) / max_{x∈X} π_n(x) = EI_n(x) π_n(x) / max_{x∈X} π_n(x) ≤ EI_n(x) ≤ I_n + (R + σ)s.
(15) For the lower bounds, we instead rely on minx∈X πn(x) in a similar manner: max ( In −Rs, τ(−R/σ) τ(R/σ) In ) ≤ EIn(x) ≤ EIn(x)πn(x) minx∈X πn(x) = EIπ,n(x) minx∈X πn(x) . (16) Consequently, EIπ can be bounded by the actual improvement as min x∈X πn(x) max ( In −Rs, τ(−R/σ) τ(R/σ) In ) ≤ EIπ,n(x) ≤ max x∈X πn(x)(In + (R+ σ)s). (17) With these bounds in place, we consider the setting as in the proof for Theorem 2 in Bull Bull (2011), which proves an upper bound for the EI strategy in the fixed kernel parameters setting. At an iteration np, p ≤ np ≤ 3p, the posterior variance will be bounded by Cp−(ν∧1)/d(log p)γ . Furthermore, since In ≥ 0 and ||f ||H`(X )) ≤ R, we can bound the total improvement as∑ i Ii ≤ ∑ i f(x∗i )− f(x∗i+1) ≤ f(x∗1)−min f ≤ 2||f ||∞ ≤ 2R, (18) leaving us a maximum of p times that In ≥ 2Rp−1. Consequently, both the posterior variance s2np and the improvement Inp are bounded at np. For a future iteration n, 3p ≤ n ≤ 3(p+ 1), we use the bounds on EIπ , snp and Inp to obtain the bounds on the EIπ loss: Ln(EIπ,Dn,H`(X ), R) = f(x∗n)−min f ≤ f(x∗np)−min f ≤ EIπ,np(x ∗) minx∈X πn(x) τ(R/σ) τ(−R/σ) ≤ EIπ,np(xn+1) minx∈X πn(x) τ(R/σ) τ(−R/σ) ≤ maxx∈X πn(x) minx∈X πn(x) τ(R/σ) τ(−R/σ) ( Inp + (R+ σ)snp ) ≤ ( maxx∈X π(x) minx∈X π(x) )β/n τ(R/σ) τ(−R/σ) (2Rp−1 + (R+ σ)Cp−(ν∧1)/d(log p)γ), where the last inequality is a factor Cπ,n = ( maxx∈X π(x) minx∈X π(x) )β/n larger than the bound on Ln(EI,Dn,H`(X ), R). Corollary 1. The loss of a decaying prior-weighted Expected Improvement strategy, EIπ, is asymptotically equal to the loss of an Expected Improvement strategy, EI: Ln(EIπ,n,Dn,H`(X ), R) ∼ Ln(EIn,Dn,H`(X ), R), (19) so we obtain a convergence rate for EIπ of Ln(EIπ,n,Dn,H`(X ), R) = O(n−(ν∧1)/d(log n)γ). Proof. We simply compute the fraction of the losses in the limit, lim n→∞ Ln(EIπ,Dn,H`(X ), R) Ln(EI,Dn,H`(X ), R) ≤ lim n→∞ ( maxx∈X π(x) minx∈X π(x) )β/n = 1. (20) E.1 SENSITIVITY ANALYSIS ON Cπ,n We now provide additional insight into how Cπ,n depends on the choices of prior and β made by the user. To do so, we consider a typical low-budget setting and display values of Cπ,n at iteration 50. We consider a one-dimensional search space where with a Gaussian prior located in the center of the search space. In the plot below, we display how the choice of σ, given as a percentage of the search space, and β, the prior confidence parameter, yield different values of Cπ,n. We see that, for approximately half of the space, the upper bound on the loss is at least 80% (bright green or yellow) of the upper bound of EI, and only a small region of very narrow priors (dark blue) give a low guaranteed convergence rate. F EXPERIMENT DETAILS F.1 FRAMEWORKS Our implementations of πBO require little change in the supporting frameworks, Spearmint and HyperMapper, and we stay as close to the default settings as possible for each framework. For both Spearmint and HyperMapper, we consider a Matérn 5/2 Kernel. For particularly strong priors, rounding errors can cause the prior to be zero in parts of the search space, potentially affecting πBO’s convergence properties. To avoid these rounding errors and ensure a strictly positive prior, we add a small constant, = 10−12, to the prior throughout the search space for all prior qualities. For the initial sampling from the prior, we truncate the distribution by disallowing sampled points from outside the search space, instead re-sampling such points. 
During optimization, we do not to explicitly truncate the prior, as points outside the search space are never considered during acquisition function maximization. Thus, the prior is effectively truncated to fit the search space without requiring additional consideration. To the best of our knowledge, there is no publicly available implementation of BOWS, so we reimplemented it in Spearmint. For the Spearmint implementation of BOWS, we provide warped versions of each benchmark, obtaining 20 unique warpings per prior quality and benchmark. We truncate the prior by restricting the warped search space to only include the region which maps back to the original search space through the inverted warping function. For all other approaches, we use the original, publicly available implementations. Notably, the available implementation of Hyperopt TPE does not support bounded search spaces under our priors; as a compromise, when asked to evaluate outside the search space we return an empirically obtained maximum on the objective function inside the search space. We use the search spaces, prior locations and descriptions used by (Souza et al., 2021) for the toy and surrogate HPO problems. We now provide additional details about the benchmarks and case study tasks used, their associated search spaces and priors, and the resources used to run these studies. F.2 BENCHMARKS AND CASE STUDIES Branin The Branin function is a well-known synthetic benchmark for optimization problems. The Branin function has two input dimensions and three global minima. Hartmann-6 The Hartmann-6 function is a well-known synthetic benchmark for optimization problems, which has one global optimum and six dimensions. SVM A hyperparameter-optimization benchmark in 2D based on Profet (Klein et al., 2019). This benchmark is generated by a generative meta-model built using a set of SVM classification models trained on 16 OpenML tasks. The benchmark has two input parameters, corresponding to SVM hyperparameters. FCNet A hyperparameter and architecture optimization benchmark in 6D based on Profet. The FC-Net benchmark is generated by a generative meta-model built using a set of feed-forward neural networks trained on the same 16 OpenML tasks as the SVM benchmark. The benchmark has six input parameters corresponding to network hyperparameters. XGBoost A hyperparameter-optimization benchmark in 8D based on Profet. The XGBoost benchmark is generated by a generative meta-model built using a set of XGBoost regression models in 11 UCI datasets. The benchmark has eight input parameters, corresponding to XGBoost hyperparameters. OpenML MLP The OpenML MLP tuning tasks are provided through HPOBenchEggensperger et al. (2021), and train binary classifiers on real-world datasets. The 5D parameter space consists of four continous parameters and one integer parameter. U-Net Medical The U-Net (Ronneberger et al., 2015) is a popular convolutional neural network architecture for image segmentation. We use the implementation and evaluation setting from the popular NVIDIA deep learning examples repository (Przemek et al.) to build a case study for optimizing hyperparameters for U-Net. The NVIDIA repository is aimed towards the segmentation of neuronal processes in electron microscopy images for the 2D EM segmentation challenge dataset (Arganda-Carreras et al., 2015; Cardona et al., 2010). We optimize 6 hyperparameters of the U-Net pipeline. 
ImageNette: ImageNette (Howard, 2019) is a subset of 10 classes of ImageNet (Deng et al., 2009) and is primarily used for algorithm development for the popular FastAI library (Howard et al., 2018). The FastAI library contains a convolutional neural network pipeline for ImageNette that is used by all competitors on the ImageNette leaderboard. We base our case study on the 80 epoch, 128 resolution setting of this leaderboard and optimize 6 of the hyperparameters of the FastAI ImageNette pipeline.

F.3 SEARCH SPACES AND PRIORS

The search spaces for each benchmark are summarized in Table 1 (Branin and Profet), Table 2 (OpenML MLP), and Table 3 (ImageNette and U-Net). For the Profet benchmarks, we report the original ranges and whether or not a log scale was used. However, in practice, Profet's generative model transforms the range of all hyperparameters to a linear [0, 1] range. We use Emukit's public implementation for these benchmarks (Paleyes et al., 2019).

F.4 CASE STUDY DETAILS

Training details for the deep learning case studies: Both case studies are based on existing deep learning code, whose hyperparameters we vary according to the HPO. In both case studies, we enabled mixed precision training, and for ImageNette-128 to work in conjunction with Spearmint, we had to enable the MKL_SERVICE_FORCE_INTEL environment flag. For all further details, we refer to the supplementary material containing our code.

Resources used for the deep learning case studies: For U-Net Medical we used one GeForce RTX 2080 Ti GPU, whereas for ImageNette-128 we used two GeForce RTX 2080 Ti GPUs. We also used 4 and 8 cores, respectively, of an AMD EPYC 7502 32-Core Processor. In Table 4 we list the GPU hours needed for running the deep learning case studies as well as the emitted CO2 equivalents.

Assets for the deep learning case studies: In addition to the assets we list in the main paper, the U-Net Medical code base we used employs the 2D EM segmentation challenge dataset (Arganda-Carreras et al., 2015; Cardona et al., 2010), which is available for the purpose of generating or testing non-commercial image segmentation software. We include licenses of all existing code assets we used in the supplementary material containing our code.

G SENSITIVITY TO PRIOR STRENGTH

We investigate the performance of πBO when provided with priors over the optimum of various qualities. To show the effect of decreasing the prior strength, a grid of prior qualities, with varying widths and offsets from the optimum, is provided; a sketch of how such a grid can be generated follows below. Thus, priors range from the strong prior used in the main results, to weak, correct priors and sharp, misplaced priors. From Figures 14-18, it is shown that πBO provides substantial performance across most prior qualities for all benchmarks but Branin, and recoups its early losses on the worst priors in the bottom left corner. πBO demonstrates sensitivity to the width of the prior, as the optimization does not progress as quickly for well-located priors with a larger width. Additionally, πBO's improvement over the Spearmint + Mode baseline is further emphasized, as this baseline often fails to meaningfully improve over the mode in early iterations.
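As a purely illustrative companion to the sensitivity study in Section G, the snippet below sketches one way such a grid of Gaussian priors could be generated by varying the width and the offset from a known optimum. The specific widths, offsets and the helper name `make_prior_grid` are assumptions of ours, not the exact values used in the study.

```python
from itertools import product
from scipy.stats import norm

def make_prior_grid(x_opt, widths, offsets):
    """Build a grid of 1D Gaussian priors around a known optimum.

    Widths and offsets are given as fractions of the search-space range,
    so (0.01, 0.0) is a sharp, well-located prior and (0.01, 0.5) a sharp,
    badly misplaced one."""
    return {
        (w, o): norm(loc=x_opt + o, scale=w)
        for w, o in product(widths, offsets)
    }

# Hypothetical grid on a unit search space with optimum at 0.4
grid = make_prior_grid(x_opt=0.4,
                       widths=[0.01, 0.05, 0.1, 0.25, 0.5],
                       offsets=[0.0, 0.05, 0.1, 0.25, 0.5])
```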
1. What is the focus and contribution of the paper regarding Bayesian optimization? 2. What are the strengths and weaknesses of the proposed approach, particularly in its simplicity, implementation, and convergence rate? 3. Do you have any concerns about the practicality of incorporating prior knowledge about optima location? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor errors or typos in the paper that need to be addressed?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a new method to incorporate prior knowledge about optima location in Bayesian optimization. The paper claims that the proposed method is simple to implement, can be used with any acquisition functions and has efficient convergence rate. Review Overall this paper is well-written and appropriately discusses the related literature. While the problem of incorporating the optimum location makes sense, I am not sure if the experts can provide a proper probability distribution about the optimum. Finding this information in practice seems rare. For example, even the experiments in section 4.4 rely on the priors which are based on already (manually) tuned deep networks. Finding useful priors may thus be hard in practice. The method seems simple and as stated in the paper is easy to implement. However, it seems to me that while the paper claims the proposed method to work with any acquisition function, it gives analysis only for EI acquisition function? Is this true? Also, there is a claim that the method can recover from any misleading prior, but it may be true only when there is a nonzero support for the true optimum x*. If the prior likelihood for x* is zero, the method may not converge. In case of misleading prior with nonzero support, is the convergence rate still sublinear? In Eq 4, what is y^*_{n+1}? It seems that left-hand side of the EI expression in Eq 4 does not depend on x, which is strange. In the experiments section, all the methods seem to be starting from different initialization points, if this is true then the comparison between the methods may not be fair. Why does \pi-BO often start from a higher starting value after the initial design? We do not know if \pi-BO does well due to the proposed algorithm or due to the better initialization? Minor Comments: On page 1: “documentesd” should be “documented” On page 4: in the third para, “show” should be “shows” On page 5: before Eq 6, “prior-weighed” should be “prior-weighted”
ICLR
Title $\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization Abstract Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample-efficiency, vanilla BO can not utilize readily available prior beliefs the practitioner has on the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose πBO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. In contrast to previous approaches, πBO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when πBO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that πBO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that πBO improves on the state-of-theart performance for a popular deep learning task, with a 12.5× time-to-accuracy speedup over prominent BO approaches. 1 INTRODUCTION The optimization of expensive black-box functions is a prominent task, arising across a wide range of applications. Bayesian optimization (BO) is a sample-efficient approach to cope with this task, and has been successfully applied to various problem settings, including hyperparameter optimization (HPO) (Snoek et al., 2012), neural architecture search (NAS) (Ru et al., 2021), joint NAS and HPO (Zimmer et al., 2021), algorithm configuration (Hutter et al., 2011), hardware design (Nardi et al., 2019), robotics (Calandra et al., 2014), and the game of Go (Chen et al., 2018). Despite the demonstrated effectiveness of BO for HPO (Bergstra et al., 2011; Turner et al., 2021), its adoption among practitioners remains limited. In a survey covering NeurIPS 2019 and ICLR 2020 (Bouthillier & Varoquaux, 2020), manual search was shown to be the most prevalent tuning method, with BO accounting for less than 7% of all tuning efforts. As the understanding of hyperparameter settings in deep learning (DL) models increase (Smith, 2018), so too does the tuning proficiency of practitioners (Anand et al., 2020). As previously displayed (Smith, 2018; Anand et al., 2020; Souza et al., 2021; Wang et al., 2019), this knowledge manifests in choosing single configurations or regions of hyperparameters that presumably yield good results, demonstrating a belief over the location of the optimum. BO’s deficit to properly incorporate said beliefs is a reason why practitioners prefer manual search to BO (Wang et al., 2019), despite its documented shortcomings (Bergstra & Bengio, 2012). To improve the usefulness of automated HPO approaches for ML practictioners, the ability to incorporate such knowledge is pivotal. Well-established BO frameworks (Snoek et al., 2012; Hutter et al., 2011; The GPyOpt authors, 2016; Kandasamy et al., 2020; Balandat et al., 2020) support user input to a limited extent, such as by biasing the initial design, or by narrowing the search space; however, this type of hard prior can lead to poor performance by missing important regions. BO also supports a prior over functions p(f) via the Gaussian Process kernel. 
However, this option for injecting knowledge is not aligned with the knowledge that experts possess: they often know which ranges of hyperparameter values tend to work best (Perrone et al., 2019; Smith, 2018; Wang et al., 2019), and are able to specify a probability distribution to quantify these priors. For example, many users of the Adam optimizer (Kingma & Ba, 2015) know that its best learning rate is often in the vicinity of 1× 10−3. In practice, DL experiments are typically conducted in a low-budget setting of less than 50 full model trainings (Bouthillier & Varoquaux, 2020). As such, practitioners want to exploit their knowledge efficiently without wasting early model trainings on configurations they expect to likely perform poorly. Unfortunately, this suits standard BO poorly, as BO requires a moderate number of function evaluations to learn about the response surface and make informed decisions that outperform random search. While there is a demand to increase knowledge injection possibilities to further the adoption of BO, the concept of encoding prior beliefs over the location of an optimum is still rather novel: while there are some initial works (Ramachandran et al., 2020; Li et al., 2020; Souza et al., 2021), no approach exists so far that allows the integration of arbitrary priors and offers flexibility in the choice of acquisition function; theory is also lacking. We close this gap by introducing a novel, remarkably simple, approach for injecting arbitrary prior beliefs into BO that is easy to implement, agnostic to the surrogate model used and converges at standard BO rates for any choice of prior. Our contributions After discussing our problem setting, related work, and background (Section 2), we make the following contributions: 1. We introduce πBO, a novel generalization of myopic acquisition functions that accounts for user-specified prior distributions over possible optima, is demonstrably simple-to-implement, and can be easily combined with arbitrary surrogate models (Section 3.1 & 3.2); 2. We formally prove that πBO inherits the theoretical properties of the well-established Expected Improvement acquisition function (Section 3.3); 3. We demonstrate on a broad range of established benchmarks and in DL case studies that πBO can yield 12.5× time-to-accuracy speedup over vanilla BO (Section 4). 2 BACKGROUND AND RELATED WORK 2.1 BLACK-BOX OPTIMIZATION We consider the problem of optimizing a black-box function f across a set of feasible inputs X ⊂ Rd: x∗ ∈ arg min x∈X f(x). (1) We assume that f(x) is expensive to evaluate, and can potentially only be observed through a noisy estimate, y. In this setting, we wish to minimize f in an efficient manner, typically adhering to a budget which sets a cap on the number of points that can be evaluated. Black-Box Optimization with Probabilistic User Beliefs In our work, we consider an augmented version of the optimization problem in Eq. (1), where we have access to user beliefs in the form of a probability distribution on the location of the optimum. Formally, we define the problem of black-box optimization with probabilistic user beliefs as solving Eq. (1), given a user-specified prior probability on the location of the optimum defined as π(x) = P ( f(x) = min x′∈X f(x′) ) , (2) where regions that the user expects to likely to contain an optimum will have a high value. We note that, without loss of generality, we require π to be strictly positive on all of X , i.e., any point in the search space might be an optimum. 
Since the user belief π(x) can be inaccurate or even misleading, optimizing Eq. (1) given (2) is a challenging problem. 2.2 BAYESIAN OPTIMIZATION We outline Bayesian optimization (Mockus et al., 1978; Brochu et al., 2010; Shahriari et al., 2016b). Model BO aims to globally minimize f by an initial experimental design D0 = {(xi, yi)}Mi=1 and thereafter sequentially deciding on new points xn to form the data Dn = Dn−1 ∪ {(xn, yn)} for the n-th iteration with n ∈ {1 . . . N}. After each new observation, BO constructs a probabilistic surrogate model of f and uses that surrogate to evaluate an acquisition function α(x,Dn). The combination of surrogate model and acquisition function encodes the policy for selecting the next point xn+1. When constructing the surrogate, the most common choice is Gaussian processes (Rasmussen & Williams, 2006), which model f as p(f |Dn) = GP(m, k), with prior mean m (which is typically 0) and positive semi-definite covariance kernel k. The posterior mean mn and the variance s2n are mn(x) = kn(x) >(Kn + σ 2 nI)y, s 2 n(x) = k(x,x)− kn(x)>(Kn + σ2nI)kn(x), (3) where (Kn)ij = k(xi,xj), kn(x) = [k(x,x1), . . . , k(x,xn)]> and σ2n is the estimation of the observation noise variance σ2. Alternative surrogate models include Random forests (Hutter et al., 2011) and Bayesian neural networks (Springenberg et al., 2016). Acquisition Functions To obtain new candidates to evaluate, BO employs a criterion, called an acquisition function, that encapsulates an explore-exploit trade-off. By maximizing this criterion at each iteration, one or more candidate point are obtained and added to observed data. Several acquisition functions are used in BO; the most common of these is Expected Improvement (EI) (Jones et al., 1998). For a noiseless function, EI selects the next point xn+1, where f∗n is the minimal objective function value observed by iteration n, as xn+1 ∈ arg max x∈X E [ [(f∗n − f(x)]+ ] = arg max x∈X Zsn(x)Φ(Z) + sn(x)φ(Z), (4) where Z = (f∗n −mn(x))/sn(x). Thus, EI provides a myopic strategy for determining promising points; it also comes with convergence guarantees (Bull, 2011). Similar myopic acquisition functions are Upper Confidence Bound (UCB) (Srinivas et al., 2012), Probability of Improvement (PI) (Jones, 2001; Kushner, 1964) and Thompson Sampling (TS) (Thompson, 1933). A different class of acquisition functions is based on non-myopic criteria, such as Entropy Search (Hennig & Schuler, 2012), Predictive Entropy Search (Hernández-Lobato et al., 2014) and Max-value Entropy Search (Wang & Jegelka, 2017), which select points to minimize the uncertainty about the optimum, and the Knowledge Gradient (Frazier et al., 2008), which aims to minimize the posterior mean of the surrogate at the subsequent iteration. Our work applies to all acquisition functions in the first class, and we leave its extension to those in the second class for future work. 2.3 RELATED WORK There are two main categories of approaches that exploit prior knowledge in BO: approaches that use records of previous experiments, and approaches that incorporate assumptions on the black-box function provided either directly or indirectly by the user. As πBO exploits prior knowledge from users, we briefly discuss approaches which utilize previous experiments, and then comprehensively discuss the literature on exploiting expert knowledge. Learning from Previous Experiments Transfer learning for BO aims to automatically extract and use knowledge from prior executions of BO. 
These executions can come, for example, from learning and optimizing the hyperparameters of a machine learning algorithm on different datasets (van Rijn & Hutter, 2018; Swersky et al., 2013; Wistuba et al., 2015; Perrone et al., 2019; Feurer et al., 2015; 2018), or from optimizing the hyperparameters at different development stages (Stoll et al., 2020). For a comprehensive overview of meta learning for hyperparameter optimization, please see the survey from Vanschoren (2018). In contrast to these transfer learning approaches, πBO and the related work discussed below does not hinge on the existence of previous experiments, and can therefore be applied more generally. Incorporating Expert Priors over Function Structure BO can leverage structural priors on how the objective function is expected to behave. Traditionally, this is done via the surrogate model’s prior over functions, e.g., the kernel of the GP. However, there are lines of work that explore additional structural priors for BO to leverage. For instance, both SMAC (Hutter et al., 2011) and iRace (LópezIbáñez et al., 2016) support structural priors in the form of log-transformations, Li et al. (2018) propose to use knowledge about the monotonicity of the objective function as a prior for BO, and Snoek et al. (2014) model non-stationary covariance between inputs by warping said inputs. Oh et al. (2018) and Siivola et al. (2018) both propose structural priors tailored to high-dimensional problems, addressing the issue of over-exploring the boundary described by Swersky (2017). Oh et al. (2018) propose a cylindrical kernel that expands the center of the search space and shrinks the edges, while Siivola et al. (2018) propose adding derivative signs to the edges of the search space to steer BO towards the center. Lastly, Shahriari et al. (2016a) propose a BO algorithm for unbounded search spaces which uses a regularizer to penalize points based on their distance to the center of the user-defined search space. All of these approaches incorporate prior information on specific properties of the function or search space, and are thus not always applicable. Moreover, they do not generally direct the search to desired regions of the search space, offering the user little control over the selection of points to evaluate. Incorporating Expert Priors over Function Optimum Few previous works have proposed to inject explicit prior distributions over the location of an optimum into BO. In these cases, users explicitly define a prior that encodes their beliefs on where the optimum is more likely to be located. Bergstra et al. (2011) suggest an approach that supports prior beliefs from a fixed set of distributions. However, this approach cannot be combined with standard acquisition functions. BOPrO (Souza et al., 2021) employs a similar structure that combines the user-provided prior distribution with a data-driven model into a pseudo-posterior. From the pseudo-posterior, configurations are selected using the EI acquisition function, using the formulation in Bergstra et al. (2011). While BOPrO is able to recover from misleading priors, its design restricts it to only use EI. Moreover, it does not provide the convergence guarantees of πBO. Li et al. (2020) propose to infer a posterior conditioned on both the observed data and the user prior through repeated Thompson sampling and maximization under the prior. This method displays robustness against misleading priors but lacks in empirical performance. 
Additionally, it is restricted to only one specific acquisition function. Ramachandran et al. (2020) use the probability integral transform to warp the search space, stretching high-probability regions and shrinking others. While the approach is model- and acquisition function agnostic, it requires invertible priors, and does not empirically display the ability to recover from misleading priors. In Section 4, we demonstrate that πBO compares favorably for priors over the function optimum, and shows improved empirical performance. Additionally, we do a complete comparison of all approaches in Appendix C. In summary, πBO sets itself apart from the methods above by being simpler (and thus easier to implement in different frameworks), flexible with regard to different acquisition functions and different surrogate models, the availability of theoretical guarantees, and, as we demonstrate in Section 4, better empirical results. 3 METHODOLOGY We now present πBO, which allows users to specify their belief about the location of the optimum through any probability distribution. A conceptually simple approach, πBO can be easily implemented in existing BO frameworks and can be combined directly with the myopic acquisition functions listed above. πBO augments an acquisition function to emphasize promising regions under the prior, ensuring such regions are to be explored frequently. As optimization progresses, the πBO strategy increasingly resembles that of vanilla BO, retaining its standard convergence rates (see Section 3.3). πBO is publicly available as part of the SMAC (https://github.com/automl/SMAC3) and HyperMapper (https://github.com/luinardi/hypermapper) HPO frameworks. 3.1 PRIOR-WEIGHTED ACQUISITION FUNCTION In πBO, we consider π(x) in Eq. (2) to be a weighting scheme on points in X . The heuristic provided by an acquisition function α(x,Dn), such as EI in Eq. (2.2), can then be combined with said weighting scheme to form a prior-weighted version of the acquisition function. The resulting strategy then becomes: xn ∈ arg max x∈X α(x,Dn)π(x). (5) This emphasizes good points under π(x) throughout the optimization. While this property is suitable for well-located priors π, it risks incurring a substantial slowdown for poorly-chosen priors; we will now show how to counter this by decaying the prior over time. 3.2 DECAYING PRIOR-WEIGHTED ACQUISITION FUNCTION As the optimization progresses, we should increasingly trust the surrogate model over the prior; the model improves with data while the user prior remains fixed. This cannot be achieved with the formulation in Eq. (5), as poorly-chosen priors would permanently slow down the optimization. Rather, to accomplish this desired behaviour, the influence of the prior needs to decay over time. Building on the approaches of Lee et al. (2020) and Souza et al. (2021), we accomplish this by raising the prior to a power γn ∈ R+, which decays towards zero with growing n. Thus, the resulting prior πn(x) = π(x)γn reflects a belief on the location of an optimum that gets weaker with time, converging towards a uniform distribution. We set γn = β/n, where β ∈ R+ is a hyperparameter set by the user, reflecting their confidence in π(x). We provide a sensitivity study on β in Appendix A. For a given acquisition function α(x,Dn) and user-specified prior π(x), we define the decaying prior-weighted acquisition function at iteration n as απ,n(x,Dn) ∆ = α(x,Dn)πn(x) ∆ = α(x,Dn)π(x)β/n (6) and its accompanying strategy as the maximizer of απ,n. 
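To make Eq. (6) concrete, here is a minimal sketch of a decaying prior-weighted acquisition function. It assumes a generic `acq` callable (e.g., EI from any BO library) and a prior density `prior_pdf`; these names, and the candidate-set maximization, are our own simplifications rather than the Spearmint or HyperMapper implementation.

```python
import numpy as np

def pi_bo_acquisition(acq, prior_pdf, beta, n):
    """Return the decaying prior-weighted acquisition of Eq. (6):
    alpha_pi,n(x) = alpha(x, D_n) * prior(x) ** (beta / n)."""
    def weighted(x):
        return acq(x) * prior_pdf(x) ** (beta / n)
    return weighted

def propose_next(acq, prior_pdf, beta, n, candidates):
    """Pick the next point by maximizing the weighted acquisition over a
    candidate set (a stand-in for a proper acquisition optimizer)."""
    weighted = pi_bo_acquisition(acq, prior_pdf, beta, n)
    values = np.array([weighted(x) for x in candidates])
    return candidates[int(np.argmax(values))]
```

As n grows, the exponent β/n shrinks toward zero, so the weighted acquisition converges to the unweighted one, matching the behaviour described above.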
With the acquisition function in Eq. (6), the prior will assume large importance initially, promoting the selection of points close to the prior mode. With time, the exponent on the prior will tend to zero, making the prior tend to uniform. Thus, with increasing n, the point selection of απ,n becomes increasingly similar to that of α. Algorithm 1 displays the simplicity of the new strategy, highlighting the required one-line change (Line 6) in the main BO loop. In Line 3, the mode of the prior is used as a first initial sample if available. Otherwise, only sampling is used for initialization.

Algorithm 1: πBO Algorithm
1: Input: Input space $\mathcal{X}$, prior distribution over the optimum $\pi(x)$, prior confidence parameter $\beta$, size $M$ of the initial design, max number of optimization iterations $N$.
2: Output: Optimized design $x^*$.
3: $\{x_i\}_{i=1}^M \sim \pi(x)$, $\{y_i \leftarrow f(x_i) + \epsilon_i\}_{i=1}^M$, $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$
4: $\mathcal{D}_0 \leftarrow \{(x_i, y_i)\}_{i=1}^M$
5: for $n = 1, 2, \ldots, N$ do
6:   $x_{\mathrm{new}} \leftarrow \arg\max_{x\in\mathcal{X}} \alpha(x, \mathcal{D}_{n-1})\,\pi(x)^{\beta/n}$
7:   $y_{\mathrm{new}} \leftarrow f(x_{\mathrm{new}}) + \epsilon_n$
8:   $\mathcal{D}_n \leftarrow \mathcal{D}_{n-1} \cup \{(x_{\mathrm{new}}, y_{\mathrm{new}})\}$
9: end for
10: return $x^* \leftarrow \arg\min_{(x_i, y_i)\in\mathcal{D}_N} y_i$

To illustrate the behaviour of πBO, we consider a toy problem with Gaussian priors on three different locations of the 1D space (center, left and right) as displayed in Figure 1. We define a 1D-Log-Branin toy problem by setting the second dimension of the 2D Branin function to the global optimum x2 = 2.275 and optimizing for the first dimension. Initially (iteration 4 in the top row), πBO amplifies the acquisition function α in high-probability regions, putting a lot of trust in the prior. As the prior decays (iterations 6 and 8 in the middle and bottom rows, respectively), the influence of the prior on the point selection decreases. By later iterations, πBO has searched substantially around the prior mode, and moves gradually towards other parts of the search space. This is of particular importance for the scenarios in the right column, where πBO recovers from a misleading prior. In Appendix B, we show that πBO is applicable to different surrogate models and acquisition functions.

3.3 THEORETICAL ANALYSIS

We now study the πBO method from a theoretical standpoint when paired with the EI acquisition function. For the full proof, we refer the reader to Appendix E. To provide convergence rates, we rely on the set of assumptions introduced by Bull (2011). These assumptions are satisfied for popular kernels like the Matérn (1960) class and the Gaussian kernel, which is obtained in the limit ν → ∞, where the rate ν controls the smoothness of functions from the GP prior. Our theoretical results apply when both the length scales $\ell$ and the global scale of variation $\sigma$ are fixed; these results can then be extended to the case where the kernel hyperparameters are learned using Maximum Likelihood Estimation (MLE), following the same procedure as in Bull (2011) (Theorem 5). We define the loss over the ball $B_R$ of functions of norm $\|f\|_{\mathcal{H}_\ell(\mathcal{X})} \leq R$ in the reproducing kernel Hilbert space (RKHS) $\mathcal{H}_\ell(\mathcal{X})$, given a symmetric positive-definite kernel $K_\ell$, as

$$L_n(u, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R) \triangleq \sup_{\|f\|_{\mathcal{H}_\ell(\mathcal{X})} \leq R} \mathbb{E}^u_f\,[f(x^*_n) - \min f], \quad (7)$$

where n is the optimization iteration and u a strategy. We focus on the strategy that maximizes EIπ, the prior-weighted EI, and show that the loss in Equation (7) can, at any iteration n, be bounded by the vanilla EI loss function. We refer to EIπ,n and EIn when we want to emphasize the iteration n for the acquisition functions EIπ and EI, respectively. Theorem 1.
Given $\mathcal{D}_n$, $K_\ell$, $\pi$, $\beta$, $\sigma$, $\ell$, $R$ and the compact set $\mathcal{X} \subset \mathbb{R}^d$ as defined above, the loss $L_n$ incurred at iteration n by EIπ,n can be bounded from above as

$$L_n(\mathrm{EI}_{\pi,n}, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R) \leq C_{\pi,n}\, L_n(\mathrm{EI}_n, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R), \qquad C_{\pi,n} = \left(\frac{\max_{x\in\mathcal{X}}\pi(x)}{\min_{x\in\mathcal{X}}\pi(x)}\right)^{\beta/n}. \quad (8)$$

Using Theorem 1, we obtain the convergence rate of EIπ. This trivially follows when considering the fraction of the losses in the limit and inserting the original convergence rate on EI as in Bull (2011):

Corollary 1. The loss of a decaying prior-weighted Expected Improvement strategy, EIπ, is asymptotically equal to the loss of an Expected Improvement strategy, EI:

$$L_n(\mathrm{EI}_{\pi,n}, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R) \sim L_n(\mathrm{EI}_n, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R), \quad (9)$$

so we obtain a convergence rate for EIπ of $L_n(\mathrm{EI}_{\pi,n}, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R) = \mathcal{O}(n^{-(\nu\wedge 1)/d}(\log n)^{\gamma})$.

Thus, we determine that the weighting introduced by EIπ does not negatively impact the worst-case convergence rate. The short-term performance is controlled by the user in their choice of π(x) and β. This result is coherent with intuition, as a weaker prior or quicker decay will yield a short-term performance closer to that of EI. In contrast, a stronger prior or slower decay does not guarantee the same short-term performance, but can produce better empirical results, as shown in Section 4.

4 RESULTS

We empirically demonstrate the efficiency of πBO in three different settings. As πBO is a general method to augment acquisition functions, it can be implemented in different parent BO packages, and the implementation in any given package inherits the pros and cons of that package. To minimize confounding factors concerning this choice of parent package, we keep comparisons within the methods in one package where possible and provide results in the other packages in Appendix C. In Sec. 4.2, using Spearmint as a parent package, we evaluate πBO against three intuitive baselines to assess its performance and robustness on priors with different qualities, ranging from very accurate to purposefully detrimental. To this end, we use toy functions and cheap surrogates, where priors of known quality can be obtained. Next, in Sec. 4.3, we compare πBO against two competitive approaches (BOPrO and BOWS) that integrate priors over the optimum similarly to πBO, using HyperMapper (Nardi et al., 2019) as a parent framework, in which the most competitive baseline BOPrO is implemented. For these experiments we adopt a Multilayer Perceptron (MLP) benchmark on various datasets, using the interface provided by HPOBench (Eggensperger et al., 2021), with priors constructed around the defaults provided by the library. Lastly, in Sec. 4.4, we apply πBO and other approaches to two deep learning tasks, also using priors derived from publicly available defaults. Further, we demonstrate the flexibility of πBO in Appendix B, where we evaluate πBO in SMAC (Hutter et al., 2011; Lindauer et al., 2021) with random forests, as another framework with another surrogate model, and adapt it to use the UCB, TS and PI acquisition functions instead of EI.

4.1 EXPERIMENTAL SETUP

Priors: For our surrogate and toy function tasks, we follow the prior construction methodology in BOPrO (Souza et al., 2021) and create three main types of prior qualities, all Gaussian: strong, weak and wrong. The strong and weak priors are located to have a high and moderate density on the optimum, respectively. The wrong prior is a narrow distribution located in the worst region of the search space.
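Before the priors for the remaining benchmarks are described, the following minimal sketch illustrates the three prior qualities above for a problem with a known optimum. The width values mirror the description in Appendix D (1% of the search space for strong and wrong priors, 10% for weak priors); the helper name `make_prior`, the use of scipy distributions, and the example coordinates are our own illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def make_prior(x_opt, x_worst, quality, seed=None):
    """Construct a Gaussian prior of a given quality ('strong', 'weak', 'wrong')
    on a unit search space with known optimum x_opt and worst point x_worst."""
    rng = np.random.default_rng(seed)
    if quality == "strong":      # narrow, offset by noise of 1% of the space
        sigma = 0.01
        mean = x_opt + rng.normal(0.0, sigma, size=x_opt.shape)
    elif quality == "weak":      # wider, offset by noise of 10% of the space
        sigma = 0.10
        mean = x_opt + rng.normal(0.0, sigma, size=x_opt.shape)
    elif quality == "wrong":     # narrow, centered on the worst region, no noise
        sigma = 0.01
        mean = x_worst
    else:
        raise ValueError(quality)
    return [norm(loc=m, scale=sigma) for m in mean]  # independent prior per dimension

# Example on a hypothetical 2D problem with (normalized) optimum and worst point
prior = make_prior(np.array([0.54, 0.15]), np.array([0.95, 0.95]), "strong")
```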
For the OpenML MLP tuning benchmark, we utilize the defaults and search spaces provided in HPOBench (Eggensperger et al., 2021), and construct Gaussian priors for each hyperparameter with their mean on the default value, and a standard deviation of 25% of the hyperparameter’s domain. For the DL case studies, we utilize defaults from each task’s repository and, for numerical hyperparameters, once again set the standard deviation to 25% of the hyperparameter’s domain. For categorical hyperparameters, we place a higher probability on the default. As such, the quality of the prior is ultimately unknown, but serves as a proxy for what a practitioner may choose and has shown to be a reasonable choice (Anastacio & Hoos, 2020). For all experiments, we run πBO with β = N/10, where N is the total number of iterations, in order to make the prior influence approximately equal in all experiments, regardless of the number of allowed BO iterations. We investigate the sensitivity to β in Appendix A, and the sensitivity to prior quality in Appendix G. Baselines We empirically evaluate πBO against the most competitive approaches for priors over the optimum described in Section 2.3: BOPrO (Souza et al., 2021) and BO in Warped Space (BOWS) (Ramachandran et al., 2020). To contextualize the performance of πBO, we provide additional, simpler baselines: random sampling, sampling from the prior and BO with prior-based initial design. The latter is initialized with the mode of the prior in addition to its regular initial design. In our main results, we choose Spearmint (with EI) (Snoek et al., 2012) for this mode-initialized baseline, simply referring to it as Spearmint. See Appendix F for complete details on the experiments. 4.2 ROBUSTNESS OF πBO First, we study the robustness of πBO. To this end, we show that πBO benefits from informative priors and can recover from wrong priors, being consistent with our theoretical results in Section 3.3. To this end, we consider a well-known black-box optimization function, Branin (2D), as well as two surrogate HPO tasks from the Profet suite (Klein et al., 2019): FC-Net (6D) and XGBoost (8D). For these tasks, we exemplarily show results for πBO implemented in the Spearmint framework. As Figure 2 shows, πBO is able to quickly improve over sampling from the prior. Moreover, it improves substantially over Spearmint (with mode initialization) for all informative priors, staying up to an order of magnitude ahead throughout the optimization for both strong and weak priors. For wrong priors, πBO displays desired robustness by recovering to approximately equal regret as Spearmint. In contrast, Spearmint frequently fails to substantially improve from its initial design on the strong and weak prior, which demonstrates the importance of considering the prior throughout the optimization procedure. This effect is even more pronounced on the higher-dimensional tasks FCNet and XGBoost, where BO typically spends many iterations at the boundary (Swersky, 2017). Here, πBO rapidly improves multiple orders of magnitude over the initial design, displaying its ability to efficiently exploit the information provided by the prior. 4.3 COMPARISON OF πBO AGAINST OTHER PRIOR-GUIDED APPROACHES Next, we study the performance of πBO against other state-of-the-art prior-guided approaches. 
To this end, we consider optimizing 5 hyperparameters of an MLP for classification (Eggensperger et al., 2021) on 6 different OpenML datasets (Vanschoren et al., 2014) and compare against BOPrO (Souza et al., 2021) and BOWS (Ramachandran et al., 2020). For minimizing confounding factors, we implement πBO and BOWS in HyperMapper (Nardi et al., 2019), the same framework that BOPrO runs on. Moreover, we let all approaches share πBO’s initialization procedure. We consider a budget of 50 iterations as it is common with ML practitioners (Bouthillier & Varoquaux, 2020). In Figure 3, we see that πBO offers the best performance on four out of six tasks, and displays the most consistent performance across tasks. In contrast to them BOWS and BOPrO, πBO also comes with theoretical guarantees and is flexible in the choice of framework and acquisition function. 4.4 CASE STUDIES ON DEEP LEARNING PIPELINES Last, we study the impact of πBO on deep learning applications, which are often fairly expensive, making efficiency even more important than in HPO for traditional machine learning. To this end, we consider two deep learning case studies: segmentation of neuronal processes in electron microscopy images with a U-Net(6D) (Ronneberger et al., 2015), with code provided from the NVIDIA deep learning examples repository (Przemek et al.), and image classification on ImageNette-128 (6D) (Howard, 2019), a light-weight adaptation of ImageNet (Deng et al., 2009), with code from the repository of the popular FastAI library (Howard et al., 2018). We mimic the setup from Section 4.3 by using the HyperMapper framework and identical initialization procedures across approaches. Gaussian priors are set on publicly available default values, which are results of previous tuning efforts of the original authors. We again optimize for a practical budget of 50 iterations (Bouthillier & Varoquaux, 2020). As test splits for both tasks were not available to us, we report validation scores. As shown in Figure 4, πBO achieves a 2.5× time-to-accuracy speedup over Vanilla BO. For ImageNette, the performance of πBO at iteration 4 already surpasses the performance of Vanilla BO at Iteration 50, demonstrating a 12.5× time-to-accuracy speedup. Ultimately, πBO’s final performance establishes a new state-of-the-art validation performance on ImageNette with the provided pipeline, with a final accuracy of 94.14% (vs. the previous state of the art with 93.55%1). 5 CONCLUSION AND FUTURE WORK We presented πBO, a conceptually very simple Bayesian optimization approach for leveraging user beliefs about the location of an optimum, which relies on a generalization of myopic acquisition functions. πBO modifies the selection of design points through a decaying weighting scheme, promoting high-probability regions under the prior. Contrary to previous approaches, πBO imposes only minor restrictions on the type of priors, surrogates or frameworks that can be used. πBO provably converges at regular rates, displays state-of-the art performance across tasks, and effectively recovers from poorly specified priors. Moreover, we have demonstrated that πBO can yield substantial performance gains for practical low-budget settings, improving on the state-of-the-art for a real-world CNN tuning tasks even with trivial choices for the prior. For practitioners who have historically relied on manual or grid search for HPO, we hope that πBO will serve as an intuitive and effective tool for bridging the gap between traditional tuning methods and BO. 
πBO sets the stage for several follow-up studies. Amongst others, we will examine the extension of πBO to non-myopic acquisition functions, such as entropy-based methods. Non-myopic acquisition functions do not fit well in the current πBO framework, as they do not necessarily benefit from evaluating inputs expected to perform well. We will also combine πBO with multi-fidelity optimization methods to yield even higher speedups, and with multi-objective optimization to jointly optimize performance and secondary objective functions, such as interpretability or fairness of models. 1https://github.com/fastai/imagenette#imagenette-leaderboard, 80 Epochs, 128 Resolution 6 ETHICS STATEMENT Our work proposes an acquisition function generalization which incorporates prior beliefs about the location of the optimum into optimization. The approach is foundational and thus will not bring direct societal or ethical consequences. However, πBO will likely be used in the development of applications for a wide range of areas and thus indirectly contribute to their impacts on society. In particular, we envision that πBO will impact a multitude of fields by allowing ML experts to inject their knowledge about the location of the optimum into Bayesian Optimization. We also note that we intend for πBO to be a tool that allows users to assist Bayesian Optimization by providing reasonable prior knowledge and beliefs. This process induces user bias into the optimization, as πBO will inevitably start by optimizing around this prior. As some users may only be interested in optimizing in the direct neighborhood of their prior, πBO could allow them to do so if provided with a high β value in relation to the number of iterations. Thus, if improperly specified, πBO could serve to reinforce user’s beliefs by providing improved solutions only for the user’s region of interest. However, if used properly, πBO will reduce the computational resources required to find strong hyperparameter settings, contributing to the sustainability of machine learning. 7 REPRODUCIBILITY In order to make the experiments run in πBO as reproducible as possible, we have included links to repositories of our implementations in both Spearmint and HyperMapper, with instructions on how to run our experiments. Moreover, we have included in said repositories all of the exact priors that we have used for our runs, which run out of the box. The priors we used were, in our opinion, well motivated as to avoid subjectivity, which we hope serves as a good frame of reference for similar works in the future. Specifically, Appendix 4.4 describes how we ran our DL experiments, Appendix F.1 goes into the implementation in further detail, and Appendix D displays the exact priors for all our experiments and prior strengths. Our Spearmint implementation of both πBO and BOWS is available at https://github.com/piboauthors/PiBO-Spearmint, and our HyperMapper implementation is available at https://github.com/piboauthors/ PiBO-Hypermapper. For our results on the convergence of πBO, we have provided a complete proof in Appendix E. 8 ACKNOWLEDGEMENTS Luigi Nardi was supported in part by affiliate members and other supporters of the Stanford DAWN project — Ant Financial, Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware. Carl Hvarfner and Luigi Nardi were partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Artur Souza was supported by CAPES, CNPq, and FAPEMIG. 
Frank Hutter acknowledges support by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme through grant no. 716721, through TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828 and by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 39/963-1 FUGG. Marius Lindauer acknowledges support by the European Research Council (ERC) under the Europe Horizon programme. The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC, partially funded by the Swedish Research Council through grant agreement no. 2018-05973.

A BETA ABLATION STUDY

We consider the effect of the β hyperparameter of πBO introduced in Section 3.2, controlling the speed of the prior decay. To show the effect of this hyperparameter, we display the performance of πBO for the toy and surrogate-based benchmarks across all prior qualities. We emphasize the trade-off between high-end performance on good priors and robustness to bad priors. In general, a higher value of β yields better performance for good priors, but makes πBO slower to recover from bad priors. This behaviour follows intuition and the results provided in Section 3.3. In Figure 5, we display how πBO performs for different choices of β, and once again provide sampling from the prior and Spearmint as baselines. Following the prior decay parameter baseline by Souza et al. (2021), we show that the choice of β = 10 consistently gives one of the best performances for strong priors, while retaining good overall robustness. Nearly all choices of β give a final performance better than that of Spearmint for good priors. Additionally, there is a clear relationship between final performance and β on all good priors. This is best visualized in the weak XGBoost experiment, where the final performances are distinctly sorted by increasing β. Similar patterns are not as apparent in the final performance on wrong priors. This behaviour highlights the benefits of slowly decaying the prior. Overall, πBO is competitive for a wide range of β, but suffers slightly worse final performance on good priors for low values of β.

B πBO VERSATILITY

We show the versatility of πBO by implementing it in numerous variants of SMAC (Hutter et al., 2011), a well-established HPO framework which supports both GP and RF surrogates, and a majority of the myopic acquisition functions mentioned in Section 2. We showcase the performance of πBO-EI, πBO-PI, πBO-UCB and πBO-TS on the general formulation of πBO with a GP surrogate, as well as πBO-EI with an RF surrogate, which requires a minor adaptation.

B.1 GENERAL FORMULATION OF πBO

To allow for the universality of πBO across several acquisition functions, we must consider the various magnitudes of acquisition functions. As UCB and TS typically output values in the same order of magnitude and sign as the objective function, we do not want the behaviour of πBO to be affected by such variations. The solution to the problem referenced above is to add a simple affine transformation to the observations, $\{y_i\}_{i=1}^n$, by subtracting the incumbent, $y^*_n$. As such, we consider at each time step not the original dataset, $\mathcal{D}_n = \{(x_i, y_i)\}_{i=1}^n$, but the augmented dataset $\hat{\mathcal{D}}_n = \{(x_i, y_i - y^*_n)\}_{i=1}^n$.
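As an illustration of this shift, the following sketch applies the incumbent subtraction before fitting the surrogate, so that acquisition magnitudes become comparable across UCB, TS, EI and PI. The function name and structure are our own; the SMAC implementation differs in its details.

```python
import numpy as np

def augment_observations(X, y):
    """Shift observations by the incumbent (current best) value, so the best
    observed value maps to 0 and acquisition magnitudes become scale- and
    sign-invariant for UCB and TS."""
    y = np.asarray(y, dtype=float)
    y_inc = y.min()               # incumbent y*_n for a minimization problem
    return np.asarray(X), y - y_inc

# The surrogate is then fit on (X, y - y*_n) instead of (X, y);
# EI and PI are unaffected by this constant shift.
X_aug, y_aug = augment_observations([[0.2], [0.7], [0.5]], [3.1, 2.4, 2.9])
```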
With this formulation, we get the desired scale- and sign-invariance in the UCB and TS acquisition functions, without changing the original strategy of any of the acquisition function. Notably, this change leaves prior-weighted EI and PI unaffected. B.2 RANDOM FOREST SURROGATE We now demonstrate πBO with a RF surrogate model. In the SMAC implementation of the RF surrogate, the model forms piece-wise constant mean and covariance functions. Naturally, this leads to the EI, PI or UCB acquisition function surface being piece-wise constant as well. Consequently, an acquisition function with a RF surrogate will typically have a region of global optima. The choice of the next design point is then selected uniformly at random among the candidate optima. We wish to retain this randomness when applying πBO. As such, we require the prior to be piece-wise constant, too. To do so, we employ a binning approach, that linearly rounds prior values after applying the decay term. The granularity of the binning decreases at the same rate as the prior, allowing the piece-wise constant regions of the prior grow in size as optimization progresses. In Figure 9, we demonstrate the importance of the piece-wise constant acquisition function by showing the point selection when applying a πBO with a continuous prior to an RF surrogate (left) and when applying the binning approach (right). Notably, the smooth prior on the left repeatedly proposes design points very close to previous points, as the prior forces the selection of points near the boundary of a promising region. Thus, the surrogate model rarely improves, and the optimization gets stuck at said boundary for multiple iterations at a time. This is best visualized at iteration 5 and 10, where similar points have been selected for all iterations in the time span. With the binned prior on the right, the selection of design points occurs randomly within a region, avoiding the static point selection and updating of non-modified approach. In Figure 8, we report the performance of πBO with a RF surrogate and the binning approach. This approach is competitive, as it provides substantial improvement over SMAC, improves over sampling from the prior, and quickly recovers from misleading priors. Notably, the binning is not required for discrete parameters, as the piece-wise constant property holds by default. Thus, this adaptation is only necessary for continuous parameters. C OTHER PRIOR-BASED APPROACHES We now demonstrate the performance of πBO for five different functions and HPO Surrogates: Branin, Hartmann-6, as well as three tasks from the Profet suite - SVM, FCNet and XGBoost. We compare all frameworks for priors over the optimum - namely BOPrO Souza et al. (2021), BOWS Ramachandran et al. (2020), TPE Bergstra et al. (2011), PS-G Li et al. (2020). The performance of πBO is shown on two different frameworks - Spearmint and Hypermapper - to allow for fair comparison and display cross-framework consistency. As BOWS is implemented in Spearmint and BOPrO in Hypermapper, they appear in the plots retaining to their framework. We display each approach with vanilla Spearmint/Hypermapper, with normal initialization, as an additional baseline. Moreover, we display the performance of πBO implemented in Spearmint, as well as Mode + Spearmint, on the MLP tuning tasks. D PRIOR CONSTRUCTION We now present the method by which we construct our priors. For the synthetic benchmarks, we mimic (Souza et al., 2021) by offsetting a Gaussian distribution from the optima. 
For our case studies, we choose a Gaussian prior with zero correlation between dimensions. This was required in order to have a simple, streamlined approach that was compatible with all frameworks. We constructed the priors once before conducting the experiments, and kept them fixed throughout.

Synthetic and Surrogate-based HPO Benchmarks: For these benchmarks, the approximate optima of all included functions could be obtained in advance, either analytically or empirically through extensive sampling. Thus, the correctness of the prior is ultimately known in advance. For a function of dimensionality $d$ with optimum at $x^*$, the strong and weak prior qualities were constructed by using a quality-specific noise term $\epsilon = \{\epsilon_i\}_{i=1}^d$ and a quality-specific standard deviation given as a fraction of the search space. For the strong prior $\pi_s(x)$, we use a small standard deviation $\sigma_s = 1\%$ and construct the prior as

$$\pi_s(x) \sim \mathcal{N}(x^* + \epsilon, \sigma_s), \qquad \epsilon_i \sim \mathcal{N}(0, \sigma_s). \quad (10)$$

We construct the weak priors analogously by using a larger standard deviation $\sigma_w = 10\%$. For our 20 runs of the strong and weak prior, this procedure yielded 20 unique priors per quality type, with varying offsets from the true optimum. Additionally, the density on the optimum is substantially larger for the strong prior than for the weak prior. No priors with a mean outside the search space were allowed; such priors were simply replaced. For Branin, we only considered one of the three Branin optima for this procedure, since not all included frameworks support multi-modal distributions. For the wrong prior, we construct it similarly to the strong prior, but around the empirical maximum, $\bar{x}^*$, of the objective function in the search space. Since this point was far away from the optimum for all benchmarks, we did not add additional noise. So, the wrong prior $\pi_m$ is constructed as

$$\pi_m(x) \sim \mathcal{N}(\bar{x}^*, \sigma_s), \quad (11)$$

which means that the wrong prior is identical across runs for a given benchmark.

E PROOFS

Here, we provide the complete proofs for the Theorem and Corollary introduced in Section 3.3. In addition, we provide insight into the interplay between β, the prior π, and the value of the derived bound $C_{\pi,n}$.

Theorem 1. Given $\mathcal{D}_n$, $K_\ell$, $\pi$, $\sigma$, $\ell$, $R$ and the compact set $\mathcal{X} \subset \mathbb{R}^d$ as defined above, the loss $L_n$ incurred at iteration n by EIπ,n can be bounded from above as

$$L_n(\mathrm{EI}_{\pi,n}, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R) \leq C_{\pi,n}\, L_n(\mathrm{EI}_n, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R), \qquad C_{\pi,n} = \left(\frac{\max_{x\in\mathcal{X}}\pi(x)}{\min_{x\in\mathcal{X}}\pi(x)}\right)^{\beta/n}. \quad (12)$$

Proof. To relate the performance of EIπ to that of EI, we primarily need to consider Lemma 7 and Lemma 8 of Bull (2011). In Lemma 7, it is stated that for any sequence of points $\{x_i\}_{i=1}^n$, dimensionality $d$, kernel length scales $\ell$, and $p \in \mathbb{N}$, the posterior standard deviation $s_n$ at $x_{n+1}$ will, for a large value $C$, satisfy the following inequality at most $p$ times:

$$s_n(x_{n+1}; \ell) \geq C p^{-(\nu\wedge 1)/d}(\log p)^{\gamma}, \qquad \gamma = \begin{cases} \alpha, & \nu \leq 1 \\ 0, & \nu > 1 \end{cases}. \quad (13)$$

Thus, we can bound the posterior variance by assuming a point in time $n_p$ at which Eq. 13 has held $p$ times. We now consider Lemma 8 where, through a number of inequalities, EI is bounded by the actual improvement $I_n$:

$$\max\left(I_n - Rs,\; \frac{\tau(-R/\sigma)}{\tau(R/\sigma)}\, I_n\right) \leq \mathrm{EI}_n(x) \leq I_n + (R+\sigma)s, \quad (14)$$

where $I_n = (f(x^*_n) - f(x))^+$, $\tau(z) = z\Phi(z) + \phi(z)$ and $s = s_n(x_n; \ell)$. Since πBO re-weights EIn by πn, these bounds need adjustment to hold for EIπ,n. For the upper bound provided in Lemma 8, we make use of $\max_{x\in\mathcal{X}}\pi_n(x)$ to bound EIπ,n(x) for any point $x \in \mathcal{X}$:

$$\frac{\mathrm{EI}_{\pi,n}(x)}{\max_{x\in\mathcal{X}}\pi_n(x)} = \frac{\mathrm{EI}_n(x)\,\pi_n(x)}{\max_{x\in\mathcal{X}}\pi_n(x)} \leq \mathrm{EI}_n(x) \leq I_n + (R+\sigma)s. \quad (15)$$

For the lower bounds, we instead rely on $\min_{x\in\mathcal{X}} \pi_n(x)$ in a similar manner:

$$\max\left(I_n - Rs,\; \frac{\tau(-R/\sigma)}{\tau(R/\sigma)}\, I_n\right) \leq \mathrm{EI}_n(x) \leq \frac{\mathrm{EI}_n(x)\,\pi_n(x)}{\min_{x\in\mathcal{X}}\pi_n(x)} = \frac{\mathrm{EI}_{\pi,n}(x)}{\min_{x\in\mathcal{X}}\pi_n(x)}. \quad (16)$$

Consequently, EIπ can be bounded by the actual improvement as

$$\min_{x\in\mathcal{X}}\pi_n(x)\,\max\left(I_n - Rs,\; \frac{\tau(-R/\sigma)}{\tau(R/\sigma)}\, I_n\right) \leq \mathrm{EI}_{\pi,n}(x) \leq \max_{x\in\mathcal{X}}\pi_n(x)\,\big(I_n + (R+\sigma)s\big). \quad (17)$$

With these bounds in place, we consider the setting of the proof of Theorem 2 in Bull (2011), which establishes an upper bound for the EI strategy with fixed kernel parameters. At an iteration $n_p$, $p \leq n_p \leq 3p$, the posterior variance is bounded by $Cp^{-(\nu\wedge 1)/d}(\log p)^{\gamma}$. Furthermore, since $I_n \geq 0$ and $\|f\|_{\mathcal{H}_\ell(\mathcal{X})} \leq R$, we can bound the total improvement as

$$\sum_i I_i \leq \sum_i f(x^*_i) - f(x^*_{i+1}) \leq f(x^*_1) - \min f \leq 2\|f\|_\infty \leq 2R, \quad (18)$$

leaving at most $p$ iterations at which $I_n \geq 2Rp^{-1}$. Consequently, both the posterior variance $s^2_{n_p}$ and the improvement $I_{n_p}$ are bounded at $n_p$. For a future iteration $n$, $3p \leq n \leq 3(p+1)$, we use the bounds on EIπ, $s_{n_p}$ and $I_{n_p}$ to obtain the bound on the EIπ loss:

$$\begin{aligned} L_n(\mathrm{EI}_\pi, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R) &= f(x^*_n) - \min f \;\leq\; f(x^*_{n_p}) - \min f \\ &\leq \frac{\mathrm{EI}_{\pi,n_p}(x^*)}{\min_{x\in\mathcal{X}}\pi_n(x)}\,\frac{\tau(R/\sigma)}{\tau(-R/\sigma)} \;\leq\; \frac{\mathrm{EI}_{\pi,n_p}(x_{n+1})}{\min_{x\in\mathcal{X}}\pi_n(x)}\,\frac{\tau(R/\sigma)}{\tau(-R/\sigma)} \\ &\leq \frac{\max_{x\in\mathcal{X}}\pi_n(x)}{\min_{x\in\mathcal{X}}\pi_n(x)}\,\frac{\tau(R/\sigma)}{\tau(-R/\sigma)}\big(I_{n_p} + (R+\sigma)s_{n_p}\big) \\ &\leq \left(\frac{\max_{x\in\mathcal{X}}\pi(x)}{\min_{x\in\mathcal{X}}\pi(x)}\right)^{\beta/n}\frac{\tau(R/\sigma)}{\tau(-R/\sigma)}\big(2Rp^{-1} + (R+\sigma)Cp^{-(\nu\wedge 1)/d}(\log p)^{\gamma}\big), \end{aligned}$$

where the last inequality is a factor $C_{\pi,n} = \left(\frac{\max_{x\in\mathcal{X}}\pi(x)}{\min_{x\in\mathcal{X}}\pi(x)}\right)^{\beta/n}$ larger than the bound on $L_n(\mathrm{EI}, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R)$.

Corollary 1. The loss of a decaying prior-weighted Expected Improvement strategy, EIπ, is asymptotically equal to the loss of an Expected Improvement strategy, EI:

$$L_n(\mathrm{EI}_{\pi,n}, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R) \sim L_n(\mathrm{EI}_n, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R), \quad (19)$$

so we obtain a convergence rate for EIπ of $L_n(\mathrm{EI}_{\pi,n}, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R) = \mathcal{O}(n^{-(\nu\wedge 1)/d}(\log n)^{\gamma})$.

Proof. We simply compute the fraction of the losses in the limit:

$$\lim_{n\to\infty} \frac{L_n(\mathrm{EI}_\pi, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R)}{L_n(\mathrm{EI}, \mathcal{D}_n, \mathcal{H}_\ell(\mathcal{X}), R)} \leq \lim_{n\to\infty}\left(\frac{\max_{x\in\mathcal{X}}\pi(x)}{\min_{x\in\mathcal{X}}\pi(x)}\right)^{\beta/n} = 1. \quad (20)$$

E.1 SENSITIVITY ANALYSIS ON $C_{\pi,n}$

We now provide additional insight into how $C_{\pi,n}$ depends on the choices of prior and $\beta$ made by the user. To do so, we consider a typical low-budget setting and display values of $C_{\pi,n}$ at iteration 50. We consider a one-dimensional search space with a Gaussian prior located in the center of the search space. In the plot below, we display how the choice of $\sigma$, given as a percentage of the search space, and $\beta$, the prior confidence parameter, yield different values of $C_{\pi,n}$. We see that, for approximately half of the space, the upper bound on the loss is at least 80% (bright green or yellow) of the upper bound of EI, and only a small region of very narrow priors (dark blue) gives a low guaranteed convergence rate.

F EXPERIMENT DETAILS

F.1 FRAMEWORKS

Our implementations of πBO require little change in the supporting frameworks, Spearmint and HyperMapper, and we stay as close to the default settings as possible for each framework. For both Spearmint and HyperMapper, we consider a Matérn 5/2 kernel. For particularly strong priors, rounding errors can cause the prior to be zero in parts of the search space, potentially affecting πBO's convergence properties. To avoid these rounding errors and ensure a strictly positive prior, we add a small constant ($10^{-12}$) to the prior throughout the search space for all prior qualities. For the initial sampling from the prior, we truncate the distribution by disallowing sampled points from outside the search space, instead re-sampling such points.
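To complement the sensitivity analysis of $C_{\pi,n}$ in Section E.1 above, the following sketch computes $C_{\pi,n} = (\max_x \pi(x) / \min_x \pi(x))^{\beta/n}$ for a one-dimensional Gaussian prior on a bounded search space. It illustrates the quantity being plotted, not the code used to produce the figure; the grid of σ and β values is our own choice.

```python
import numpy as np
from scipy.stats import norm

def c_pi_n(sigma, beta, n=50, lower=0.0, upper=1.0, grid=1001):
    """C_{pi,n} for a Gaussian prior centered in [lower, upper] with width
    sigma (as a fraction of the search space), evaluated at iteration n."""
    xs = np.linspace(lower, upper, grid)
    pdf = norm.pdf(xs, loc=(lower + upper) / 2, scale=sigma * (upper - lower))
    pdf = pdf + 1e-12                       # same strictly-positive floor as in F.1
    return (pdf.max() / pdf.min()) ** (beta / n)

# A narrower prior or a larger beta keeps a larger factor at iteration 50
for sigma, beta in [(0.25, 5.0), (0.05, 5.0), (0.01, 20.0)]:
    print(sigma, beta, round(c_pi_n(sigma, beta), 3))
```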
During optimization, we do not explicitly truncate the prior, as points outside the search space are never considered during acquisition function maximization. Thus, the prior is effectively truncated to fit the search space without requiring additional consideration.

To the best of our knowledge, there is no publicly available implementation of BOWS, so we reimplemented it in Spearmint. For the Spearmint implementation of BOWS, we provide warped versions of each benchmark, obtaining 20 unique warpings per prior quality and benchmark. We truncate the prior by restricting the warped search space to only include the region which maps back to the original search space through the inverted warping function. For all other approaches, we use the original, publicly available implementations. Notably, the available implementation of Hyperopt TPE does not support bounded search spaces under our priors; as a compromise, when asked to evaluate outside the search space we return an empirically obtained maximum of the objective function inside the search space.

We use the search spaces, prior locations and descriptions used by Souza et al. (2021) for the toy and surrogate HPO problems. We now provide additional details about the benchmarks and case study tasks used, their associated search spaces and priors, and the resources used to run these studies.

F.2 BENCHMARKS AND CASE STUDIES

Branin: The Branin function is a well-known synthetic benchmark for optimization problems. The Branin function has two input dimensions and three global minima.

Hartmann-6: The Hartmann-6 function is a well-known synthetic benchmark for optimization problems, which has one global optimum and six dimensions.

SVM: A hyperparameter-optimization benchmark in 2D based on Profet (Klein et al., 2019). This benchmark is generated by a generative meta-model built using a set of SVM classification models trained on 16 OpenML tasks. The benchmark has two input parameters, corresponding to SVM hyperparameters.

FCNet: A hyperparameter and architecture optimization benchmark in 6D based on Profet. The FC-Net benchmark is generated by a generative meta-model built using a set of feed-forward neural networks trained on the same 16 OpenML tasks as the SVM benchmark. The benchmark has six input parameters corresponding to network hyperparameters.

XGBoost: A hyperparameter-optimization benchmark in 8D based on Profet. The XGBoost benchmark is generated by a generative meta-model built using a set of XGBoost regression models trained on 11 UCI datasets. The benchmark has eight input parameters, corresponding to XGBoost hyperparameters.

OpenML MLP: The OpenML MLP tuning tasks are provided through HPOBench (Eggensperger et al., 2021), and train binary classifiers on real-world datasets. The 5D parameter space consists of four continuous parameters and one integer parameter.

U-Net Medical: The U-Net (Ronneberger et al., 2015) is a popular convolutional neural network architecture for image segmentation. We use the implementation and evaluation setting from the popular NVIDIA deep learning examples repository (Przemek et al.) to build a case study for optimizing hyperparameters for U-Net. The NVIDIA repository is aimed towards the segmentation of neuronal processes in electron microscopy images for the 2D EM segmentation challenge dataset (Arganda-Carreras et al., 2015; Cardona et al., 2010). We optimize 6 hyperparameters of the U-Net pipeline.
ImageNette: ImageNette (Howard, 2019) is a subset of 10 classes of ImageNet (Deng et al., 2009) and is primarily used for algorithm development for the popular FastAI library (Howard et al., 2018). The FastAI library contains a convolutional neural network pipeline for ImageNette that is used by all competitors on the ImageNette leaderboard. We base our case study on the 80-epoch, 128-resolution setting of this leaderboard and optimize 6 of the hyperparameters of the FastAI ImageNette pipeline.

F.3 SEARCH SPACES AND PRIORS

The search spaces for each benchmark are summarized in Table 1 (Branin and Profet), Table 2 (OpenML MLP), and Table 3 (ImageNette and U-Net). For the Profet benchmarks, we report the original ranges and whether or not a log scale was used. However, in practice, Profet's generative model transforms the range of all hyperparameters to a linear [0, 1] range. We use Emukit's public implementation for these benchmarks (Paleyes et al., 2019).

F.4 CASE STUDY DETAILS

Training details for the deep learning case studies: Both case studies are based on existing deep learning code, whose hyperparameters we vary according to the HPO. In both case studies, we enabled mixed-precision training, and for ImageNette-128 to work in conjunction with Spearmint, we had to enable the MKL_SERVICE_FORCE_INTEL environment flag. For all further details, we refer to the supplementary material containing our code.

Resources used for the deep learning case studies: For U-Net Medical we used one GeForce RTX 2080 Ti GPU, whereas for ImageNette-128 we used two GeForce RTX 2080 Ti GPUs. We additionally used 4 and 8 cores, respectively, of an AMD EPYC 7502 32-core processor. In Table 4 we list the GPU hours needed for running the deep learning case studies as well as the emitted CO2 equivalents.

Assets for the deep learning case studies: In addition to the assets we list in the main paper, the U-Net Medical code base we used employs the 2D EM segmentation challenge dataset (Arganda-Carreras et al., 2015; Cardona et al., 2010), which is available for the purpose of generating or testing non-commercial image segmentation software. We include licenses of all existing code assets we used in the supplementary material containing our code.

G SENSITIVITY TO PRIOR STRENGTH

We investigate the performance of πBO when provided with priors over the optimum of varying quality. To show the effect of decreasing the prior strength, we evaluate a grid of prior qualities with varying widths and offsets from the optimum. The priors thus range from the strong priors used in the main results to weak, correctly located priors and sharp, misplaced priors. Figures 14–18 show that πBO performs well across most prior qualities for all benchmarks but Branin, and recoups its early losses on the worst priors in the bottom-left corner. πBO is sensitive to the width of the prior, as the optimization does not progress as quickly for well-located priors with a larger width. Additionally, πBO's improvement over the Spearmint + Mode baseline is further emphasized, as this baseline often fails to meaningfully improve over the mode in early iterations.
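As a rough illustration of how such a grid of prior qualities might be generated (this is our own sketch under assumptions, not the exact code behind Figures 14–18; the unit search space, the grid values, and the function name are illustrative), one can sweep the standard deviation and the offset of a Gaussian prior around a known optimum:

```python
import numpy as np

def prior_quality_grid(x_opt,
                       widths=(0.01, 0.05, 0.10, 0.25),
                       offsets=(0.0, 0.05, 0.10, 0.25),
                       seed=0):
    """Gaussian priors of varying quality around a known optimum x_opt in [0, 1].

    A small width with a small offset approximates a strong, well-located prior;
    a large width gives a weak prior; a small width with a large offset yields a
    sharp, misplaced prior."""
    rng = np.random.default_rng(seed)
    grid = {}
    for sigma in widths:
        for delta in offsets:
            # Offset direction chosen at random so misplacement is not systematic.
            mean = np.clip(x_opt + rng.choice([-1.0, 1.0]) * delta, 0.0, 1.0)
            grid[(sigma, delta)] = {"mean": float(mean), "std": sigma}
    return grid

priors = prior_quality_grid(x_opt=0.4)
print(priors[(0.01, 0.25)])  # a sharp prior placed far from the optimum
```

Evaluating the optimizer once per cell of such a grid is what produces the quality sweep reported in this appendix.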
1. What is the focus of the paper, and what contribution does it make to the field of Bayesian optimization?
2. What are the strengths of the proposed approach, particularly in its simplicity and experimental results?
3. Are there any weaknesses or limitations in the method, such as the choice of prior distribution?
4. How does the reviewer assess the novelty and significance of the paper's content?
5. Are there any suggestions for further improvements or connections to other works in the field?
Summary Of The Paper Review
Summary Of The Paper

Summary: PiBO is a very straightforward paper that seeks to incorporate prior knowledge about the optimum to accelerate Bayesian optimization. It achieves this by simply multiplying the acquisition function (Expected Improvement, in the paper) by the prior distribution and then maximizing that product. The prior is then decayed over time in order to deal with mis-specified priors. The authors also provide some theory showing that this weighted acquisition function performs (for some particular definition of performance) no worse than EI times a constant that depends on the decay rate. It's a reasonable idea; honestly I'm surprised no paper has tried this before. Synthetic and real-world experiments indicate that:
- With a well-specified prior, PiBO demonstrates superior performance over its basic EI counterpart.
- With a poorly-specified prior, PiBO initially performs worse than its EI counterpart but is usually able to recover given enough iterations.

Review

Strengths: The method is simple and sensible, and the experiments convincingly demonstrate that it works with a well-specified prior and, more importantly, show that a poor prior can lead to problems. I know it doesn't have a lot of moving parts or the complicated mathematical formulations that the BO community tends to favor these days, but I am in favor of acceptance for the reasons mentioned above. It's a rigorous scientific work that could have a significant impact on the way BO is performed in the HPO community today.

Weaknesses: My only criticism is that the priors used (on continuous parameters) are Gaussian. The authors may be leaving performance on the table by using such a simple prior. But this does not affect my score much.

Additional Notes: The decay method you use appeared in the workshop paper "Cost-aware Bayesian Optimization" by Lee et al. (2020), in a slightly different context (to decrease the impact of a cost model), so you might consider citing it if you want to reinforce that this kind of decaying modification has precedent in the literature. There is also a stronger connection with meta-learning for HPO as a whole (which seeks to build and then exploit priors), and perhaps this could be included as a short paragraph in the related work section with some additional citations (e.g., more of Hutter's work).
ICLR
Title $\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization Abstract Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample-efficiency, vanilla BO can not utilize readily available prior beliefs the practitioner has on the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose πBO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. In contrast to previous approaches, πBO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when πBO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that πBO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that πBO improves on the state-of-theart performance for a popular deep learning task, with a 12.5× time-to-accuracy speedup over prominent BO approaches. 1 INTRODUCTION The optimization of expensive black-box functions is a prominent task, arising across a wide range of applications. Bayesian optimization (BO) is a sample-efficient approach to cope with this task, and has been successfully applied to various problem settings, including hyperparameter optimization (HPO) (Snoek et al., 2012), neural architecture search (NAS) (Ru et al., 2021), joint NAS and HPO (Zimmer et al., 2021), algorithm configuration (Hutter et al., 2011), hardware design (Nardi et al., 2019), robotics (Calandra et al., 2014), and the game of Go (Chen et al., 2018). Despite the demonstrated effectiveness of BO for HPO (Bergstra et al., 2011; Turner et al., 2021), its adoption among practitioners remains limited. In a survey covering NeurIPS 2019 and ICLR 2020 (Bouthillier & Varoquaux, 2020), manual search was shown to be the most prevalent tuning method, with BO accounting for less than 7% of all tuning efforts. As the understanding of hyperparameter settings in deep learning (DL) models increase (Smith, 2018), so too does the tuning proficiency of practitioners (Anand et al., 2020). As previously displayed (Smith, 2018; Anand et al., 2020; Souza et al., 2021; Wang et al., 2019), this knowledge manifests in choosing single configurations or regions of hyperparameters that presumably yield good results, demonstrating a belief over the location of the optimum. BO’s deficit to properly incorporate said beliefs is a reason why practitioners prefer manual search to BO (Wang et al., 2019), despite its documented shortcomings (Bergstra & Bengio, 2012). To improve the usefulness of automated HPO approaches for ML practictioners, the ability to incorporate such knowledge is pivotal. Well-established BO frameworks (Snoek et al., 2012; Hutter et al., 2011; The GPyOpt authors, 2016; Kandasamy et al., 2020; Balandat et al., 2020) support user input to a limited extent, such as by biasing the initial design, or by narrowing the search space; however, this type of hard prior can lead to poor performance by missing important regions. BO also supports a prior over functions p(f) via the Gaussian Process kernel. 
However, this option for injecting knowledge is not aligned with the knowledge that experts possess: they often know which ranges of hyperparameter values tend to work best (Perrone et al., 2019; Smith, 2018; Wang et al., 2019), and are able to specify a probability distribution to quantify these priors. For example, many users of the Adam optimizer (Kingma & Ba, 2015) know that its best learning rate is often in the vicinity of 1× 10−3. In practice, DL experiments are typically conducted in a low-budget setting of less than 50 full model trainings (Bouthillier & Varoquaux, 2020). As such, practitioners want to exploit their knowledge efficiently without wasting early model trainings on configurations they expect to likely perform poorly. Unfortunately, this suits standard BO poorly, as BO requires a moderate number of function evaluations to learn about the response surface and make informed decisions that outperform random search. While there is a demand to increase knowledge injection possibilities to further the adoption of BO, the concept of encoding prior beliefs over the location of an optimum is still rather novel: while there are some initial works (Ramachandran et al., 2020; Li et al., 2020; Souza et al., 2021), no approach exists so far that allows the integration of arbitrary priors and offers flexibility in the choice of acquisition function; theory is also lacking. We close this gap by introducing a novel, remarkably simple, approach for injecting arbitrary prior beliefs into BO that is easy to implement, agnostic to the surrogate model used and converges at standard BO rates for any choice of prior. Our contributions After discussing our problem setting, related work, and background (Section 2), we make the following contributions: 1. We introduce πBO, a novel generalization of myopic acquisition functions that accounts for user-specified prior distributions over possible optima, is demonstrably simple-to-implement, and can be easily combined with arbitrary surrogate models (Section 3.1 & 3.2); 2. We formally prove that πBO inherits the theoretical properties of the well-established Expected Improvement acquisition function (Section 3.3); 3. We demonstrate on a broad range of established benchmarks and in DL case studies that πBO can yield 12.5× time-to-accuracy speedup over vanilla BO (Section 4). 2 BACKGROUND AND RELATED WORK 2.1 BLACK-BOX OPTIMIZATION We consider the problem of optimizing a black-box function f across a set of feasible inputs X ⊂ Rd: x∗ ∈ arg min x∈X f(x). (1) We assume that f(x) is expensive to evaluate, and can potentially only be observed through a noisy estimate, y. In this setting, we wish to minimize f in an efficient manner, typically adhering to a budget which sets a cap on the number of points that can be evaluated. Black-Box Optimization with Probabilistic User Beliefs In our work, we consider an augmented version of the optimization problem in Eq. (1), where we have access to user beliefs in the form of a probability distribution on the location of the optimum. Formally, we define the problem of black-box optimization with probabilistic user beliefs as solving Eq. (1), given a user-specified prior probability on the location of the optimum defined as π(x) = P ( f(x) = min x′∈X f(x′) ) , (2) where regions that the user expects to likely to contain an optimum will have a high value. We note that, without loss of generality, we require π to be strictly positive on all of X , i.e., any point in the search space might be an optimum. 
Since the user belief π(x) can be inaccurate or even misleading, optimizing Eq. (1) given (2) is a challenging problem. 2.2 BAYESIAN OPTIMIZATION We outline Bayesian optimization (Mockus et al., 1978; Brochu et al., 2010; Shahriari et al., 2016b). Model BO aims to globally minimize f by an initial experimental design D0 = {(xi, yi)}Mi=1 and thereafter sequentially deciding on new points xn to form the data Dn = Dn−1 ∪ {(xn, yn)} for the n-th iteration with n ∈ {1 . . . N}. After each new observation, BO constructs a probabilistic surrogate model of f and uses that surrogate to evaluate an acquisition function α(x,Dn). The combination of surrogate model and acquisition function encodes the policy for selecting the next point xn+1. When constructing the surrogate, the most common choice is Gaussian processes (Rasmussen & Williams, 2006), which model f as p(f |Dn) = GP(m, k), with prior mean m (which is typically 0) and positive semi-definite covariance kernel k. The posterior mean mn and the variance s2n are mn(x) = kn(x) >(Kn + σ 2 nI)y, s 2 n(x) = k(x,x)− kn(x)>(Kn + σ2nI)kn(x), (3) where (Kn)ij = k(xi,xj), kn(x) = [k(x,x1), . . . , k(x,xn)]> and σ2n is the estimation of the observation noise variance σ2. Alternative surrogate models include Random forests (Hutter et al., 2011) and Bayesian neural networks (Springenberg et al., 2016). Acquisition Functions To obtain new candidates to evaluate, BO employs a criterion, called an acquisition function, that encapsulates an explore-exploit trade-off. By maximizing this criterion at each iteration, one or more candidate point are obtained and added to observed data. Several acquisition functions are used in BO; the most common of these is Expected Improvement (EI) (Jones et al., 1998). For a noiseless function, EI selects the next point xn+1, where f∗n is the minimal objective function value observed by iteration n, as xn+1 ∈ arg max x∈X E [ [(f∗n − f(x)]+ ] = arg max x∈X Zsn(x)Φ(Z) + sn(x)φ(Z), (4) where Z = (f∗n −mn(x))/sn(x). Thus, EI provides a myopic strategy for determining promising points; it also comes with convergence guarantees (Bull, 2011). Similar myopic acquisition functions are Upper Confidence Bound (UCB) (Srinivas et al., 2012), Probability of Improvement (PI) (Jones, 2001; Kushner, 1964) and Thompson Sampling (TS) (Thompson, 1933). A different class of acquisition functions is based on non-myopic criteria, such as Entropy Search (Hennig & Schuler, 2012), Predictive Entropy Search (Hernández-Lobato et al., 2014) and Max-value Entropy Search (Wang & Jegelka, 2017), which select points to minimize the uncertainty about the optimum, and the Knowledge Gradient (Frazier et al., 2008), which aims to minimize the posterior mean of the surrogate at the subsequent iteration. Our work applies to all acquisition functions in the first class, and we leave its extension to those in the second class for future work. 2.3 RELATED WORK There are two main categories of approaches that exploit prior knowledge in BO: approaches that use records of previous experiments, and approaches that incorporate assumptions on the black-box function provided either directly or indirectly by the user. As πBO exploits prior knowledge from users, we briefly discuss approaches which utilize previous experiments, and then comprehensively discuss the literature on exploiting expert knowledge. Learning from Previous Experiments Transfer learning for BO aims to automatically extract and use knowledge from prior executions of BO. 
These executions can come, for example, from learning and optimizing the hyperparameters of a machine learning algorithm on different datasets (van Rijn & Hutter, 2018; Swersky et al., 2013; Wistuba et al., 2015; Perrone et al., 2019; Feurer et al., 2015; 2018), or from optimizing the hyperparameters at different development stages (Stoll et al., 2020). For a comprehensive overview of meta learning for hyperparameter optimization, please see the survey from Vanschoren (2018). In contrast to these transfer learning approaches, πBO and the related work discussed below does not hinge on the existence of previous experiments, and can therefore be applied more generally. Incorporating Expert Priors over Function Structure BO can leverage structural priors on how the objective function is expected to behave. Traditionally, this is done via the surrogate model’s prior over functions, e.g., the kernel of the GP. However, there are lines of work that explore additional structural priors for BO to leverage. For instance, both SMAC (Hutter et al., 2011) and iRace (LópezIbáñez et al., 2016) support structural priors in the form of log-transformations, Li et al. (2018) propose to use knowledge about the monotonicity of the objective function as a prior for BO, and Snoek et al. (2014) model non-stationary covariance between inputs by warping said inputs. Oh et al. (2018) and Siivola et al. (2018) both propose structural priors tailored to high-dimensional problems, addressing the issue of over-exploring the boundary described by Swersky (2017). Oh et al. (2018) propose a cylindrical kernel that expands the center of the search space and shrinks the edges, while Siivola et al. (2018) propose adding derivative signs to the edges of the search space to steer BO towards the center. Lastly, Shahriari et al. (2016a) propose a BO algorithm for unbounded search spaces which uses a regularizer to penalize points based on their distance to the center of the user-defined search space. All of these approaches incorporate prior information on specific properties of the function or search space, and are thus not always applicable. Moreover, they do not generally direct the search to desired regions of the search space, offering the user little control over the selection of points to evaluate. Incorporating Expert Priors over Function Optimum Few previous works have proposed to inject explicit prior distributions over the location of an optimum into BO. In these cases, users explicitly define a prior that encodes their beliefs on where the optimum is more likely to be located. Bergstra et al. (2011) suggest an approach that supports prior beliefs from a fixed set of distributions. However, this approach cannot be combined with standard acquisition functions. BOPrO (Souza et al., 2021) employs a similar structure that combines the user-provided prior distribution with a data-driven model into a pseudo-posterior. From the pseudo-posterior, configurations are selected using the EI acquisition function, using the formulation in Bergstra et al. (2011). While BOPrO is able to recover from misleading priors, its design restricts it to only use EI. Moreover, it does not provide the convergence guarantees of πBO. Li et al. (2020) propose to infer a posterior conditioned on both the observed data and the user prior through repeated Thompson sampling and maximization under the prior. This method displays robustness against misleading priors but lacks in empirical performance. 
Additionally, it is restricted to only one specific acquisition function. Ramachandran et al. (2020) use the probability integral transform to warp the search space, stretching high-probability regions and shrinking others. While the approach is model- and acquisition function agnostic, it requires invertible priors, and does not empirically display the ability to recover from misleading priors. In Section 4, we demonstrate that πBO compares favorably for priors over the function optimum, and shows improved empirical performance. Additionally, we do a complete comparison of all approaches in Appendix C. In summary, πBO sets itself apart from the methods above by being simpler (and thus easier to implement in different frameworks), flexible with regard to different acquisition functions and different surrogate models, the availability of theoretical guarantees, and, as we demonstrate in Section 4, better empirical results. 3 METHODOLOGY We now present πBO, which allows users to specify their belief about the location of the optimum through any probability distribution. A conceptually simple approach, πBO can be easily implemented in existing BO frameworks and can be combined directly with the myopic acquisition functions listed above. πBO augments an acquisition function to emphasize promising regions under the prior, ensuring such regions are to be explored frequently. As optimization progresses, the πBO strategy increasingly resembles that of vanilla BO, retaining its standard convergence rates (see Section 3.3). πBO is publicly available as part of the SMAC (https://github.com/automl/SMAC3) and HyperMapper (https://github.com/luinardi/hypermapper) HPO frameworks. 3.1 PRIOR-WEIGHTED ACQUISITION FUNCTION In πBO, we consider π(x) in Eq. (2) to be a weighting scheme on points in X . The heuristic provided by an acquisition function α(x,Dn), such as EI in Eq. (2.2), can then be combined with said weighting scheme to form a prior-weighted version of the acquisition function. The resulting strategy then becomes: xn ∈ arg max x∈X α(x,Dn)π(x). (5) This emphasizes good points under π(x) throughout the optimization. While this property is suitable for well-located priors π, it risks incurring a substantial slowdown for poorly-chosen priors; we will now show how to counter this by decaying the prior over time. 3.2 DECAYING PRIOR-WEIGHTED ACQUISITION FUNCTION As the optimization progresses, we should increasingly trust the surrogate model over the prior; the model improves with data while the user prior remains fixed. This cannot be achieved with the formulation in Eq. (5), as poorly-chosen priors would permanently slow down the optimization. Rather, to accomplish this desired behaviour, the influence of the prior needs to decay over time. Building on the approaches of Lee et al. (2020) and Souza et al. (2021), we accomplish this by raising the prior to a power γn ∈ R+, which decays towards zero with growing n. Thus, the resulting prior πn(x) = π(x)γn reflects a belief on the location of an optimum that gets weaker with time, converging towards a uniform distribution. We set γn = β/n, where β ∈ R+ is a hyperparameter set by the user, reflecting their confidence in π(x). We provide a sensitivity study on β in Appendix A. For a given acquisition function α(x,Dn) and user-specified prior π(x), we define the decaying prior-weighted acquisition function at iteration n as απ,n(x,Dn) ∆ = α(x,Dn)πn(x) ∆ = α(x,Dn)π(x)β/n (6) and its accompanying strategy as the maximizer of απ,n. 
With the acquisition function in Eq. (6), the prior will assume large importance initially, promoting the selection of points close to the prior mode. With time, the exponent on the prior will tend to zero, making the prior tend to uniform. Thus, with increasing n, the point selection of απ,n becomes increasingly similar to that of α. Algorithm 1 displays the simplicity of the new strategy, highlighting the required one-line change (Line 6) in the main BO loop. In Line 3, the mode of the prior is used as a first initial sample if available. Otherwise, only sampling is used for initialization. Algorithm 1 πBO Algorithm 1: Input: Input space X , prior distribution over optimum π(x), prior confidence parameter β, size M of the initial design, max number of optimization iterations N . 2: Output: Optimized design x∗. 3: {xi}Mi=1 ∼ π(x), {yi ← f(xi) + i}Mi=1, i ∼ N(0, σ2) 4: D0 ← {(xi, yi)}Mi=1 5: for {n = 1, 2, . . . , N} do 6: xnew ← arg maxx∈X α(x,Dn−1)π(x)β/n 7: ynew ← f(xnew) + i 8: Dn ← Dn−1 ∪ {(xnew, ynew)} 9: end for 10: return x∗ ← arg min(xi,yi)∈DN yi To illustrate the behaviour of πBO, we consider a toy problem with Gaussian priors on three different locations of the 1D space (center, left and right) as displayed in Figure 1. We define a 1D-Log-Branin toy problem by setting the second dimension of the 2D Branin function to the global optimum x2 = 2.275 and optimizing for the first dimension. Initially (iteration 4 in the top row), πBO amplifies the acquisition function α in high-probability regions, putting a lot of trust in the prior. As the prior decays (iteration 6 and 8 in the middle and bottom rows, respectively), the influence of the prior on the point selection decreases. By later iterations, πBO has searched substantially around the prior mode, and moves gradually towards other parts of the search space. This is of particular importance for the scenarios in the right column, where πBO recovers from a misleading prior. In Appendix B, we show that πBO is applicable to different surrogate models and acquisition functions. 3.3 THEORETICAL ANALYSIS We now study the πBO method from a theoretical standpoint when paired with the EI acquisition function. For the full proof, we refer the reader to Appendix E. To provide convergence rates, we rely on the set of assumptions introduced by Bull (2011). These assumptions are satisfied for popular kernels like the Matérn (1960) class and the Gaussian kernel, which is obtained in the limit ν →∞, where the rate ν controls the smoothness of functions from the GP prior. Our theoretical results apply when both length scales ` and the global scale of variation σ are fixed; these results can then be extended to the case where the kernel hyperparameters are learned using Maximum Likelihood Estimation (MLE) following the same procedure as in Bull (2011) (Theorem 5). We define the loss over the ball BR for a function f of norm ||f ||H`(X ) ≤ R in the reproducing kernel Hilbert space (RKHS)H`(X ) given a symmetric positive-definite kernel K` as Ln(u,Dn,H`(X ), R) ∆ = sup ||f ||H`(X)≤R Euf [f(x∗n)−min f ], (7) where n is the optimization iteration and u a strategy. We focus on the strategy that maximizes EIπ , the prior-weighted EI, and show that the loss in Equation (7) can, at any iteration n, be bounded by the vanilla EI loss function. We refer to EIπ,n and EIn when we want to emphasize the iteration n for the acquisition functions EIπ and EI, respectively. Theorem 1. 
Given Dn, K`, π, β, σ, `, R and the compact set X ⊂ Rd as defined above, the loss Ln incurred at iteration n by EIπ,n can be bounded from above as Ln(EIπ,n,Dn,H`(X ), R) ≤ Cπ,nLn(EIn,Dn,H`(X ), R), Cπ,n = ( maxx∈X π(x) minx∈X π(x) )β/n . (8) Using Theorem 1, we obtain the convergence rate of EIπ . This trivially follows when considering the fraction of the losses in the limit and inserting the original convergence rate on EI as in Bull (2011): Corollary 1. The loss of a decaying prior-weighted Expected Improvement strategy, EIπ, is asymptotically equal to the loss of an Expected Improvement strategy, EI: Ln(EIπ,n,Dn,H`(X ), R) ∼ Ln(EIn,Dn,H`(X ), R), (9) so we obtain a convergence rate for EIπ of Ln(EIπ,n,Dn,H`(X ), R) = O(n−(ν∧1)/d(log n)γ). Thus, we determine that the weighting introduced by EIπ does not negatively impact the worst-case convergence rate. The short-term performance is controlled by the user in their choice of π(x) and β. This result is coherent with intuition, as a weaker prior or quicker decay will yield a short-term performance closer to that of EI. In contrast, a stronger prior or slower decay does not guarantee the same short-term performance, but can produce better empirical results, as shown in Section 4. 4 RESULTS We empirically demonstrate the efficiency of πBO in three different settings. As πBO is a general method to augment acquisition functions, it can be implemented in different parent BO packages, and the implementation in any given package inherits the pros and cons of that package. To minimize confounding factors concerning this choice of parent package, we keep comparisons within the methods in one package where possible and provide results in the other packages in Appendix C. In Sec. 4.2, using Spearmint as a parent package, we evaluate πBO against three intuitive baselines to assess its performance and robustness on priors with different qualities, ranging from very accurate to purposefully detrimental. To this end, we use toy functions and cheap surrogates, where priors of known quality can be obtained. Next, in Sec. 4.3, we compare πBO against two competitive approaches (BOPrO and BOWS) that integrate priors over the optimum similarly to πBO, using HyperMapper (Nardi et al., 2019) as a parent framework, in which the most competitive baseline BOPrO is implemented. For these experiments we adopt a Multilayer Perceptron (MLP) benchmark on various datasets, using the interface provided by HPOBench (Eggensperger et al., 2021), with priors constructed around the defaults provided by the library. Lastly, in Sec. 4.4, we apply πBO and other approaches to two deep learning tasks, also using priors derived from publicly available defaults. Further, we demonstrate the flexibility of πBO in Appendix B, where we evaluate πBO in SMAC (Hutter et al., 2011; Lindauer et al., 2021) with random forests, as another framework with another surrogate model, and adapt it to use the UCB, TS and PI acquisition functions instead of EI. 4.1 EXPERIMENTAL SETUP Priors For our surrogate and toy function tasks, we follow the prior construction methodology in BOPrO (Souza et al., 2021) and create three main types of prior qualities, all Gaussian: strong, weak and wrong. The strong and weak priors are located to have a high and moderate density on the optimum, respectively. The wrong prior is a narrow distribution located in the worst region of the search space. 
For the OpenML MLP tuning benchmark, we utilize the defaults and search spaces provided in HPOBench (Eggensperger et al., 2021), and construct Gaussian priors for each hyperparameter with their mean on the default value, and a standard deviation of 25% of the hyperparameter’s domain. For the DL case studies, we utilize defaults from each task’s repository and, for numerical hyperparameters, once again set the standard deviation to 25% of the hyperparameter’s domain. For categorical hyperparameters, we place a higher probability on the default. As such, the quality of the prior is ultimately unknown, but serves as a proxy for what a practitioner may choose and has shown to be a reasonable choice (Anastacio & Hoos, 2020). For all experiments, we run πBO with β = N/10, where N is the total number of iterations, in order to make the prior influence approximately equal in all experiments, regardless of the number of allowed BO iterations. We investigate the sensitivity to β in Appendix A, and the sensitivity to prior quality in Appendix G. Baselines We empirically evaluate πBO against the most competitive approaches for priors over the optimum described in Section 2.3: BOPrO (Souza et al., 2021) and BO in Warped Space (BOWS) (Ramachandran et al., 2020). To contextualize the performance of πBO, we provide additional, simpler baselines: random sampling, sampling from the prior and BO with prior-based initial design. The latter is initialized with the mode of the prior in addition to its regular initial design. In our main results, we choose Spearmint (with EI) (Snoek et al., 2012) for this mode-initialized baseline, simply referring to it as Spearmint. See Appendix F for complete details on the experiments. 4.2 ROBUSTNESS OF πBO First, we study the robustness of πBO. To this end, we show that πBO benefits from informative priors and can recover from wrong priors, being consistent with our theoretical results in Section 3.3. To this end, we consider a well-known black-box optimization function, Branin (2D), as well as two surrogate HPO tasks from the Profet suite (Klein et al., 2019): FC-Net (6D) and XGBoost (8D). For these tasks, we exemplarily show results for πBO implemented in the Spearmint framework. As Figure 2 shows, πBO is able to quickly improve over sampling from the prior. Moreover, it improves substantially over Spearmint (with mode initialization) for all informative priors, staying up to an order of magnitude ahead throughout the optimization for both strong and weak priors. For wrong priors, πBO displays desired robustness by recovering to approximately equal regret as Spearmint. In contrast, Spearmint frequently fails to substantially improve from its initial design on the strong and weak prior, which demonstrates the importance of considering the prior throughout the optimization procedure. This effect is even more pronounced on the higher-dimensional tasks FCNet and XGBoost, where BO typically spends many iterations at the boundary (Swersky, 2017). Here, πBO rapidly improves multiple orders of magnitude over the initial design, displaying its ability to efficiently exploit the information provided by the prior. 4.3 COMPARISON OF πBO AGAINST OTHER PRIOR-GUIDED APPROACHES Next, we study the performance of πBO against other state-of-the-art prior-guided approaches. 
To this end, we consider optimizing 5 hyperparameters of an MLP for classification (Eggensperger et al., 2021) on 6 different OpenML datasets (Vanschoren et al., 2014) and compare against BOPrO (Souza et al., 2021) and BOWS (Ramachandran et al., 2020). For minimizing confounding factors, we implement πBO and BOWS in HyperMapper (Nardi et al., 2019), the same framework that BOPrO runs on. Moreover, we let all approaches share πBO’s initialization procedure. We consider a budget of 50 iterations as it is common with ML practitioners (Bouthillier & Varoquaux, 2020). In Figure 3, we see that πBO offers the best performance on four out of six tasks, and displays the most consistent performance across tasks. In contrast to them BOWS and BOPrO, πBO also comes with theoretical guarantees and is flexible in the choice of framework and acquisition function. 4.4 CASE STUDIES ON DEEP LEARNING PIPELINES Last, we study the impact of πBO on deep learning applications, which are often fairly expensive, making efficiency even more important than in HPO for traditional machine learning. To this end, we consider two deep learning case studies: segmentation of neuronal processes in electron microscopy images with a U-Net(6D) (Ronneberger et al., 2015), with code provided from the NVIDIA deep learning examples repository (Przemek et al.), and image classification on ImageNette-128 (6D) (Howard, 2019), a light-weight adaptation of ImageNet (Deng et al., 2009), with code from the repository of the popular FastAI library (Howard et al., 2018). We mimic the setup from Section 4.3 by using the HyperMapper framework and identical initialization procedures across approaches. Gaussian priors are set on publicly available default values, which are results of previous tuning efforts of the original authors. We again optimize for a practical budget of 50 iterations (Bouthillier & Varoquaux, 2020). As test splits for both tasks were not available to us, we report validation scores. As shown in Figure 4, πBO achieves a 2.5× time-to-accuracy speedup over Vanilla BO. For ImageNette, the performance of πBO at iteration 4 already surpasses the performance of Vanilla BO at Iteration 50, demonstrating a 12.5× time-to-accuracy speedup. Ultimately, πBO’s final performance establishes a new state-of-the-art validation performance on ImageNette with the provided pipeline, with a final accuracy of 94.14% (vs. the previous state of the art with 93.55%1). 5 CONCLUSION AND FUTURE WORK We presented πBO, a conceptually very simple Bayesian optimization approach for leveraging user beliefs about the location of an optimum, which relies on a generalization of myopic acquisition functions. πBO modifies the selection of design points through a decaying weighting scheme, promoting high-probability regions under the prior. Contrary to previous approaches, πBO imposes only minor restrictions on the type of priors, surrogates or frameworks that can be used. πBO provably converges at regular rates, displays state-of-the art performance across tasks, and effectively recovers from poorly specified priors. Moreover, we have demonstrated that πBO can yield substantial performance gains for practical low-budget settings, improving on the state-of-the-art for a real-world CNN tuning tasks even with trivial choices for the prior. For practitioners who have historically relied on manual or grid search for HPO, we hope that πBO will serve as an intuitive and effective tool for bridging the gap between traditional tuning methods and BO. 
πBO sets the stage for several follow-up studies. Amongst others, we will examine the extension of πBO to non-myopic acquisition functions, such as entropy-based methods. Non-myopic acquisition functions do not fit well in the current πBO framework, as they do not necessarily benefit from evaluating inputs expected to perform well. We will also combine πBO with multi-fidelity optimization methods to yield even higher speedups, and with multi-objective optimization to jointly optimize performance and secondary objective functions, such as interpretability or fairness of models. 1https://github.com/fastai/imagenette#imagenette-leaderboard, 80 Epochs, 128 Resolution 6 ETHICS STATEMENT Our work proposes an acquisition function generalization which incorporates prior beliefs about the location of the optimum into optimization. The approach is foundational and thus will not bring direct societal or ethical consequences. However, πBO will likely be used in the development of applications for a wide range of areas and thus indirectly contribute to their impacts on society. In particular, we envision that πBO will impact a multitude of fields by allowing ML experts to inject their knowledge about the location of the optimum into Bayesian Optimization. We also note that we intend for πBO to be a tool that allows users to assist Bayesian Optimization by providing reasonable prior knowledge and beliefs. This process induces user bias into the optimization, as πBO will inevitably start by optimizing around this prior. As some users may only be interested in optimizing in the direct neighborhood of their prior, πBO could allow them to do so if provided with a high β value in relation to the number of iterations. Thus, if improperly specified, πBO could serve to reinforce user’s beliefs by providing improved solutions only for the user’s region of interest. However, if used properly, πBO will reduce the computational resources required to find strong hyperparameter settings, contributing to the sustainability of machine learning. 7 REPRODUCIBILITY In order to make the experiments run in πBO as reproducible as possible, we have included links to repositories of our implementations in both Spearmint and HyperMapper, with instructions on how to run our experiments. Moreover, we have included in said repositories all of the exact priors that we have used for our runs, which run out of the box. The priors we used were, in our opinion, well motivated as to avoid subjectivity, which we hope serves as a good frame of reference for similar works in the future. Specifically, Appendix 4.4 describes how we ran our DL experiments, Appendix F.1 goes into the implementation in further detail, and Appendix D displays the exact priors for all our experiments and prior strengths. Our Spearmint implementation of both πBO and BOWS is available at https://github.com/piboauthors/PiBO-Spearmint, and our HyperMapper implementation is available at https://github.com/piboauthors/ PiBO-Hypermapper. For our results on the convergence of πBO, we have provided a complete proof in Appendix E. 8 ACKNOWLEDGEMENTS Luigi Nardi was supported in part by affiliate members and other supporters of the Stanford DAWN project — Ant Financial, Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware. Carl Hvarfner and Luigi Nardi were partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Artur Souza was supported by CAPES, CNPq, and FAPEMIG. 
Frank Hutter acknowledges support by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme through grant no. 716721, through TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828 and by the state of BadenWürttemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 39/963-1 FUGG. Marius Lindauer acknowledges support by the European Research Council (ERC) under the Europe Horizon programme. The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC partially funded by the Swedish Research Council through grant agreement no. 2018-05973. A BETA ABLATION STUDY We consider the effect of the β hyperparameter of πBO introduced in Section 3.2, controlling the speed of the prior decay. To show the effect of this hyperparameter, we display the performance of πBO for the toy and surrogate-based benchmarks across all prior qualities. We emphasize the trade-off between high-end performance on good priors and robustness to bad priors. In general, a higher value of β yields better performance for good priors, but makes πBO slower to recover from bad priors. This behaviour follows intuition and the results provided in Section 3.3. In Figure 5, we display how πBO performs for different choices of β, and once again provide sampling from the prior and Spearmint as baselines. Following the prior decay parameter baseline by (Souza et al., 2021), we show that the choice of β = 10 onsistently gives one of the best performances for strong priors, while retaining good overall robustness. Nearly all choices of β give a final performance better than that of Spearmint for good priors. Additionally, there is a clear relationship between final performance and β on all good priors. This is best visualized in the weak XGBoost experiment, where the final performances are distinctly sorted by increasing β. Similar patterns are not as apparent in the final performance on wrong priors. This behaviour highlights the benefits of slowly decaying the prior. Overall, πBO is competitive for a wide range of β, but suffers slightly worse final performance on good priors for low values of β. B πBO VERSATILITY We show the versatility of πBO by implementing it in numerous variants of SMAC Hutter et al. (2011), a well-established HPO framework which supports both GP and RF surrogates, and a majority of the myopic acquisition functions mentioned in Section 2. We showcase the performance of πBO-EI, πBO-PI, πBO-UCB and πBO-TS on the general formulation of πBO with a GP surrogate, as well as πBO-EI with an RF surrogate, which requires a minor adaptation. B.1 GENERAL FORMULATION OF πBO To allow for the universality of πBO across several acquisition function, we must consider the various magnitudes of acquisition functions. As UCB and TS typically output values in the same order of magnitude and sign as the objective function, we do not want the behaviour of πBO to be affected by such variations. The solution to the problem referenced above is to add a simple affine transformation to the observations, {yi}ni=1, by subtracting by the incumbent, y∗n. As such, we consider at each time step not the original dataset, Dn = {(xi, yi)}ni=1, but the augmented dataset D̂n = {(xi, yi − y∗n)}ni=1. 
With this formulation, we get the desired scale- and sign-invariance in the UCB and TS acquisition functions, without changing the original strategy of any of the acquisition function. Notably, this change leaves prior-weighted EI and PI unaffected. B.2 RANDOM FOREST SURROGATE We now demonstrate πBO with a RF surrogate model. In the SMAC implementation of the RF surrogate, the model forms piece-wise constant mean and covariance functions. Naturally, this leads to the EI, PI or UCB acquisition function surface being piece-wise constant as well. Consequently, an acquisition function with a RF surrogate will typically have a region of global optima. The choice of the next design point is then selected uniformly at random among the candidate optima. We wish to retain this randomness when applying πBO. As such, we require the prior to be piece-wise constant, too. To do so, we employ a binning approach, that linearly rounds prior values after applying the decay term. The granularity of the binning decreases at the same rate as the prior, allowing the piece-wise constant regions of the prior grow in size as optimization progresses. In Figure 9, we demonstrate the importance of the piece-wise constant acquisition function by showing the point selection when applying a πBO with a continuous prior to an RF surrogate (left) and when applying the binning approach (right). Notably, the smooth prior on the left repeatedly proposes design points very close to previous points, as the prior forces the selection of points near the boundary of a promising region. Thus, the surrogate model rarely improves, and the optimization gets stuck at said boundary for multiple iterations at a time. This is best visualized at iteration 5 and 10, where similar points have been selected for all iterations in the time span. With the binned prior on the right, the selection of design points occurs randomly within a region, avoiding the static point selection and updating of non-modified approach. In Figure 8, we report the performance of πBO with a RF surrogate and the binning approach. This approach is competitive, as it provides substantial improvement over SMAC, improves over sampling from the prior, and quickly recovers from misleading priors. Notably, the binning is not required for discrete parameters, as the piece-wise constant property holds by default. Thus, this adaptation is only necessary for continuous parameters. C OTHER PRIOR-BASED APPROACHES We now demonstrate the performance of πBO for five different functions and HPO Surrogates: Branin, Hartmann-6, as well as three tasks from the Profet suite - SVM, FCNet and XGBoost. We compare all frameworks for priors over the optimum - namely BOPrO Souza et al. (2021), BOWS Ramachandran et al. (2020), TPE Bergstra et al. (2011), PS-G Li et al. (2020). The performance of πBO is shown on two different frameworks - Spearmint and Hypermapper - to allow for fair comparison and display cross-framework consistency. As BOWS is implemented in Spearmint and BOPrO in Hypermapper, they appear in the plots retaining to their framework. We display each approach with vanilla Spearmint/Hypermapper, with normal initialization, as an additional baseline. Moreover, we display the performance of πBO implemented in Spearmint, as well as Mode + Spearmint, on the MLP tuning tasks. D PRIOR CONSTRUCTION We now present the method by which we construct our priors. For the synthetic benchmarks, we mimic (Souza et al., 2021) by offsetting a Gaussian distribution from the optima. 
For our case studies, we choose a Gaussian prior with zero correlation between dimensions. This was required in order to have a simple, streamlined approach that was compatible with all frameworks. We constructed the priors once before conducting the experiments, and kept them fixed throughout. Synthetic and Surrogate-based HPO Benchmarks For these benchmarks, the approximate optima of all included functions could be obtained in advance, either analytically or empirically through extensive sampling. Thus, the correctness of the prior is ultimately known in advance. For a function of dimensionality d with optimum at x∗, the strong and weak prior qualities were constructed by using a quality-specific noise term = { i}di=1 and quality-specific standard deviation as a fraction of the search space. For the strong prior πs(x), we use a small standard deviation σs = 1% and construct the prior as πs(x) ∼ N (x∗ + , σs), i ∼ N (0, σs). (10) We construct the weak priors analogously by using a larger standard deviation σw = 10%. For our 20 runs of the strong and weak prior, this procedure yielded us 20 unique priors per quality type, with varying offsets from the true optimum. Additionally, the density on the optimum is substantially larger for the strong prior than the weak prior. No priors with a mean outside the search space were allowed, such priors were simply replaced. For Branin, we only considered one of the three Branin optima for this procedure, since not all included frameworks support multi-modal distributions. For the wrong prior, we construct it similarly to the strong prior, but around the empirical maximum, x∗̄, of the objective function in the search space. Since this point was far away from the optimum for all benchmarks, we did not add additional noise. So, the wrong prior πm is constructed as πm(x) ∼ N (x∗̄, σs), (11) which means that the wrong prior is identical across runs for a given benchmark. E PROOFS Here, we provide the complete proofs for the Theorem and Corollary introduced in 3.3. In addition, we provide insight into the interplay between β, the prior π, and the value of the derived bound Cπ,n. Theorem 1. Given Dn, K`, π, σ, `, R and the compact set X ⊂ Rd as defined above, the loss Ln incurred at iteration n by EIπ,n can be bounded from above as Ln(EIπ,n,Dn,H`(X ), R) ≤ Cπ,nLn(EIn,Dn,H`(X ), R), Cπ,n = ( maxx∈X π(x) minx∈X π(x) )β/n . (12) Proof. To bound the performance of EIπ to that of EI, we primarily need to consider Lemma 7 and Lemma 8 by Bull Bull (2011). In Lemma 7, it is stated that for any sequence of points {xi}ni=1, dimensionality d, kernel length scales `, and p ∈ N, the posterior variance s2n on xn+1 will, for a large value C, satisfy the following inequality at most p times, sn(xn+1; `) ≥ Cp−(ν∧1)/d(log p)γ , γ = { α, ν ≤ 1 0, ν > 1 . (13) Thus, we can bound the posterior variance by assuming a point in time np where Eq. 13 has held p times. We now consider Lemma 8 where, through a number of inequalities, EI is bounded by the actual improvement In max ( In −Rs, τ(−R/σ) τ(R/σ) In ) ≤ EIn(x) ≤ In + (R+ σ)s, (14) where In = (f(x∗n)−f(x))+, τ(z) = zΦ(z) +φ(z) and s = sn(xn; `). Since πBO re-weights EIn by πn, these bounds need adjustment to hold for EIπ,n. For the upper bound provided in Lemma 8, we make use of maxx∈X πn(x) to bound EIπ,n(x) for any point x ∈ X : EIπ,n(x) maxx∈X πn(x) = EIn(x)πn(x) maxx∈X πn(x) ≤ EIn(x) ≤ In + (R+ σ)s. 
(15) For the lower bounds, we instead rely on minx∈X πn(x) in a similar manner: max ( In −Rs, τ(−R/σ) τ(R/σ) In ) ≤ EIn(x) ≤ EIn(x)πn(x) minx∈X πn(x) = EIπ,n(x) minx∈X πn(x) . (16) Consequently, EIπ can be bounded by the actual improvement as min x∈X πn(x) max ( In −Rs, τ(−R/σ) τ(R/σ) In ) ≤ EIπ,n(x) ≤ max x∈X πn(x)(In + (R+ σ)s). (17) With these bounds in place, we consider the setting as in the proof for Theorem 2 in Bull Bull (2011), which proves an upper bound for the EI strategy in the fixed kernel parameters setting. At an iteration np, p ≤ np ≤ 3p, the posterior variance will be bounded by Cp−(ν∧1)/d(log p)γ . Furthermore, since In ≥ 0 and ||f ||H`(X )) ≤ R, we can bound the total improvement as∑ i Ii ≤ ∑ i f(x∗i )− f(x∗i+1) ≤ f(x∗1)−min f ≤ 2||f ||∞ ≤ 2R, (18) leaving us a maximum of p times that In ≥ 2Rp−1. Consequently, both the posterior variance s2np and the improvement Inp are bounded at np. For a future iteration n, 3p ≤ n ≤ 3(p+ 1), we use the bounds on EIπ , snp and Inp to obtain the bounds on the EIπ loss: Ln(EIπ,Dn,H`(X ), R) = f(x∗n)−min f ≤ f(x∗np)−min f ≤ EIπ,np(x ∗) minx∈X πn(x) τ(R/σ) τ(−R/σ) ≤ EIπ,np(xn+1) minx∈X πn(x) τ(R/σ) τ(−R/σ) ≤ maxx∈X πn(x) minx∈X πn(x) τ(R/σ) τ(−R/σ) ( Inp + (R+ σ)snp ) ≤ ( maxx∈X π(x) minx∈X π(x) )β/n τ(R/σ) τ(−R/σ) (2Rp−1 + (R+ σ)Cp−(ν∧1)/d(log p)γ), where the last inequality is a factor Cπ,n = ( maxx∈X π(x) minx∈X π(x) )β/n larger than the bound on Ln(EI,Dn,H`(X ), R). Corollary 1. The loss of a decaying prior-weighted Expected Improvement strategy, EIπ, is asymptotically equal to the loss of an Expected Improvement strategy, EI: Ln(EIπ,n,Dn,H`(X ), R) ∼ Ln(EIn,Dn,H`(X ), R), (19) so we obtain a convergence rate for EIπ of Ln(EIπ,n,Dn,H`(X ), R) = O(n−(ν∧1)/d(log n)γ). Proof. We simply compute the fraction of the losses in the limit, lim n→∞ Ln(EIπ,Dn,H`(X ), R) Ln(EI,Dn,H`(X ), R) ≤ lim n→∞ ( maxx∈X π(x) minx∈X π(x) )β/n = 1. (20) E.1 SENSITIVITY ANALYSIS ON Cπ,n We now provide additional insight into how Cπ,n depends on the choices of prior and β made by the user. To do so, we consider a typical low-budget setting and display values of Cπ,n at iteration 50. We consider a one-dimensional search space where with a Gaussian prior located in the center of the search space. In the plot below, we display how the choice of σ, given as a percentage of the search space, and β, the prior confidence parameter, yield different values of Cπ,n. We see that, for approximately half of the space, the upper bound on the loss is at least 80% (bright green or yellow) of the upper bound of EI, and only a small region of very narrow priors (dark blue) give a low guaranteed convergence rate. F EXPERIMENT DETAILS F.1 FRAMEWORKS Our implementations of πBO require little change in the supporting frameworks, Spearmint and HyperMapper, and we stay as close to the default settings as possible for each framework. For both Spearmint and HyperMapper, we consider a Matérn 5/2 Kernel. For particularly strong priors, rounding errors can cause the prior to be zero in parts of the search space, potentially affecting πBO’s convergence properties. To avoid these rounding errors and ensure a strictly positive prior, we add a small constant, = 10−12, to the prior throughout the search space for all prior qualities. For the initial sampling from the prior, we truncate the distribution by disallowing sampled points from outside the search space, instead re-sampling such points. 
During optimization, we do not to explicitly truncate the prior, as points outside the search space are never considered during acquisition function maximization. Thus, the prior is effectively truncated to fit the search space without requiring additional consideration. To the best of our knowledge, there is no publicly available implementation of BOWS, so we reimplemented it in Spearmint. For the Spearmint implementation of BOWS, we provide warped versions of each benchmark, obtaining 20 unique warpings per prior quality and benchmark. We truncate the prior by restricting the warped search space to only include the region which maps back to the original search space through the inverted warping function. For all other approaches, we use the original, publicly available implementations. Notably, the available implementation of Hyperopt TPE does not support bounded search spaces under our priors; as a compromise, when asked to evaluate outside the search space we return an empirically obtained maximum on the objective function inside the search space. We use the search spaces, prior locations and descriptions used by (Souza et al., 2021) for the toy and surrogate HPO problems. We now provide additional details about the benchmarks and case study tasks used, their associated search spaces and priors, and the resources used to run these studies. F.2 BENCHMARKS AND CASE STUDIES Branin The Branin function is a well-known synthetic benchmark for optimization problems. The Branin function has two input dimensions and three global minima. Hartmann-6 The Hartmann-6 function is a well-known synthetic benchmark for optimization problems, which has one global optimum and six dimensions. SVM A hyperparameter-optimization benchmark in 2D based on Profet (Klein et al., 2019). This benchmark is generated by a generative meta-model built using a set of SVM classification models trained on 16 OpenML tasks. The benchmark has two input parameters, corresponding to SVM hyperparameters. FCNet A hyperparameter and architecture optimization benchmark in 6D based on Profet. The FC-Net benchmark is generated by a generative meta-model built using a set of feed-forward neural networks trained on the same 16 OpenML tasks as the SVM benchmark. The benchmark has six input parameters corresponding to network hyperparameters. XGBoost A hyperparameter-optimization benchmark in 8D based on Profet. The XGBoost benchmark is generated by a generative meta-model built using a set of XGBoost regression models in 11 UCI datasets. The benchmark has eight input parameters, corresponding to XGBoost hyperparameters. OpenML MLP The OpenML MLP tuning tasks are provided through HPOBenchEggensperger et al. (2021), and train binary classifiers on real-world datasets. The 5D parameter space consists of four continous parameters and one integer parameter. U-Net Medical The U-Net (Ronneberger et al., 2015) is a popular convolutional neural network architecture for image segmentation. We use the implementation and evaluation setting from the popular NVIDIA deep learning examples repository (Przemek et al.) to build a case study for optimizing hyperparameters for U-Net. The NVIDIA repository is aimed towards the segmentation of neuronal processes in electron microscopy images for the 2D EM segmentation challenge dataset (Arganda-Carreras et al., 2015; Cardona et al., 2010). We optimize 6 hyperparameters of the U-Net pipeline. 
ImageNette ImageNette (Howard, 2019) is a subset of 10 classes of ImageNet (Deng et al., 2009) and is primarily used for algorithm development for the popular FastAI library (Howard et al., 2018). The FastAI library contains a convolutional neural network pipeline for ImageNette, which is used by all competitors on the ImageNette leaderboard. We base our case study on the 80 epoch, 128 resolution setting of this leaderboard and optimize 6 of the hyperparameters of the FastAI ImageNette pipeline.

F.3 SEARCH SPACES AND PRIORS

The search spaces for each benchmark are summarized in Table 1 (Branin and Profet), Table 2 (OpenML MLP), and Table 3 (ImageNette and U-Net). For the Profet benchmarks, we report the original ranges and whether or not a log scale was used. However, in practice, Profet's generative model transforms the range of all hyperparameters to a linear [0, 1] range. We use Emukit's public implementation for these benchmarks (Paleyes et al., 2019).

F.4 CASE STUDY DETAILS

Training details for the deep learning case studies Both case studies are based on existing deep learning code, whose hyperparameters we vary according to the HPO. In both case studies, we enabled mixed precision training, and for ImageNette-128 to work in conjunction with Spearmint, we had to enable the MKL_SERVICE_FORCE_INTEL environment flag. For all further details, we refer to the supplementary material containing our code.

Resources used for the deep learning case studies For U-Net Medical we used one GeForce RTX 2080 Ti GPU, whereas for ImageNette-128 we used two GeForce RTX 2080 Ti GPUs. We also used 4 and 8 cores, respectively, of an AMD EPYC 7502 32-Core Processor. In Table 4 we list the GPU hours needed for running the deep learning case studies as well as the emitted CO2 equivalents.

Assets for the deep learning case studies In addition to the assets we list in the main paper, the U-Net Medical code base we used employs the 2D EM segmentation challenge dataset (Arganda-Carreras et al., 2015; Cardona et al., 2010), which is available for the purpose of generating or testing non-commercial image segmentation software. We include licenses of all existing code assets we used in the supplementary material containing our code.

G SENSITIVITY TO PRIOR STRENGTH

We investigate the performance of πBO when providing priors over the optimum of various qualities. To show the effect of decreasing the prior strength, a grid of prior qualities, with varying widths and offsets from the optimum, is provided. Thus, priors range from the strong prior used in the main results, to weak, correct priors and sharp, misplaced priors. Figures 14-18 show that πBO performs well across most prior qualities for all benchmarks but Branin, and recoups its early losses on the worst priors in the bottom left corner. πBO demonstrates sensitivity to the width of the prior, as the optimization does not progress as quickly for well-located priors with a larger width. Additionally, πBO's improvement over the Spearmint + Mode baseline is further emphasized, as this baseline often fails to meaningfully improve over the mode in early iterations.
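For reference, the constant $C_{\pi,n}$ from Appendix E.1 can be computed directly from its definition $C_{\pi,n} = (\max_x \pi(x) / \min_x \pi(x))^{\beta/n}$. The sketch below does so for a Gaussian prior centred in a [0, 1] search space at iteration $n = 50$; the particular grid of $\sigma$ and $\beta$ values is illustrative rather than the exact grid used in the figures.

```python
import numpy as np
from scipy.stats import norm

def c_pi_n(sigma, beta, n=50, grid=np.linspace(0.0, 1.0, 1001)):
    """C_{pi,n} = (max_x pi(x) / min_x pi(x)) ** (beta / n) for a Gaussian prior
    centred at 0.5, evaluated on a dense grid over the [0, 1] search space."""
    pi = norm.pdf(grid, loc=0.5, scale=sigma) + 1e-12  # epsilon floor as in Appendix F.1
    return (pi.max() / pi.min()) ** (beta / n)

for sigma in [0.05, 0.1, 0.25, 0.5]:   # prior width as a fraction of the search space
    for beta in [1.0, 5.0, 10.0]:       # prior confidence parameter
        print(f"sigma={sigma:.2f}  beta={beta:4.1f}  C_pi_n={c_pi_n(sigma, beta):.3f}")
```

Values close to 1 indicate that the upper bound on the πBO loss essentially matches the EI bound, while very narrow priors yield a large factor.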
1. What is the focus of the paper regarding Bayesian optimization? 2. What are the strengths of the proposed approach, particularly in its simplicity and empirical evaluations? 3. What are the weaknesses of the paper, especially regarding its originality and potential applications? 4. Do you have any minor suggestions or questions regarding the presentation and content of the paper?
Summary Of The Paper Review
Summary Of The Paper In this article the authors propose to incorporate domain knowledge on where good configurations are located into Bayesian optimization (BO). They do so by weighting the acquisition function by a prior that decays and reverts to uniform as iterations progress. The asymptotic rate of convergence is shown to be of the same order as the non-prior version. Empirical tests are provided on various synthetic and hyperparameter tuning tasks, with diverse baseline methods and ablation studies. Review The paper is clear and has an extensive empirical evaluation. An asymptotic convergence result shows that there is only a constant-factor difference with the regular method. The main drawback lies in the apparent simplicity of the approach, in terms of originality. Giving more weight to areas supposed to be better in the acquisition function optimization may already exist in application-oriented papers (though I did not find such an example with a quick search). Minor: Consistent colors/symbols for methods across figures would help. Additional details on how to define the prior would make the paper more self-contained. Note that the unbounded BO method would apply with the definition of starting bounds (say taking 50% of the volume under the prior). The differences with BOPrO could be better highlighted. I appreciate the theoretical result, albeit one could hope to show some improvement from using a good prior. Typos: P1: documentesd
ICLR
Title Yet another but more efficient black-box adversarial attack: tiling and evolution strategies Abstract We introduce a new black-box attack achieving state-of-the-art performance. Our approach is based on a new objective function, borrowing ideas from ℓ∞ white-box attacks, and particularly designed to fit derivative-free optimization requirements. It only requires access to the logits of the classifier, without any other information, which is a more realistic scenario. Not only do we introduce a new objective function, we also extend previous work on black-box adversarial attacks to a larger spectrum of evolution strategies and other derivative-free optimization methods. We further highlight a new intriguing property: deep neural networks are not robust to single-shot tiled attacks. With a budget limited to 10,000 queries, our models achieve a success rate of up to 99.2% against the InceptionV3 classifier with 630 queries to the network on average in the untargeted attack setting, which improves on the current state of the art by 90 queries. In the targeted setting, with a limited budget of 100,000 queries, we reach a 100% success rate with 6,662 queries on average, i.e. 800 queries fewer than the current state of the art. 1 INTRODUCTION Despite their success, deep learning algorithms have shown vulnerability to adversarial attacks (Biggio et al., 2013; Szegedy et al., 2014), i.e. small imperceptible perturbations of the inputs that lead the networks to misclassify the generated adversarial examples. Since their discovery, adversarial attacks and defenses have become one of the hottest research topics in the machine learning community, as serious security issues are raised in many critical fields. They also question our understanding of deep learning behaviors. Although some advances have been made to explain adversarial attacks theoretically (Fawzi et al., 2016; Sinha et al., 2017; Cohen et al., 2019; Pinot et al., 2019) and experimentally (Goodfellow et al., 2015; Xie et al., 2018; Meng & Chen, 2017; Samangouei et al., 2018; Araujo et al., 2019), the phenomenon remains misunderstood and there is still a gap in coming up with principled guarantees on the robustness of neural networks against maliciously crafted attacks. Designing new and stronger attacks helps build better defenses, hence the motivation of our work. The first attacks were generated in a setting where the attacker knows all the information about the network (architecture and parameters). In this white-box setting, the main idea is to perturb the input in the direction of the gradient of the loss w.r.t. the input (Goodfellow et al., 2015; Kurakin et al., 2016; Carlini & Wagner, 2017; Moosavi-Dezfooli et al., 2016). This case is unrealistic because the attacker has only limited access to the network in practice. For instance, web services that offer commercial recognition systems such as Amazon or Google are backed by pretrained neural networks. A user can query such a system by sending an image to classify. For such a query, the user only has access to the inference results of the classifier, which might be either the label, probabilities or logits. Such a setting is coined in the literature as the black-box setting. It is more realistic but also more challenging from the attacker's standpoint. As a consequence, several works proposed black-box attacks that just query the inference results of a given classifier.
A natural way consists in exploiting the transferability of an adversarial attack, based on the idea that if an example fools a classifier, it is more likely that it fools another one (Papernot et al., 2016a). In this case, a white-box attack is crafted on a fully known classifier. Papernot et al. (2017) exploited this property to derive practical black-box attacks. Another approach within the black-box setting consists in estimating the gradient of the loss by querying the classifier (Chen et al., 2017; Ilyas et al., 2018a;b). For these attacks, the PGD algorithm (Kurakin et al., 2016; Madry et al., 2018a) is used and the gradient is replaced by its estimate. In this paper, we propose efficient black-box adversarial attacks using stochastic derivative-free optimization (DFO) methods with only access to the logits of the classifier. By efficient, we mean that our model requires a limited number of queries while outperforming the state of the art in terms of attack success rate. At the very core of our approach is a new objective function particularly designed to suit classical derivative-free optimization; it leverages results and ideas from ℓ∞-attacks. We also highlight a new intriguing property: deep neural networks are not robust to single-shot tiled attacks. We further explore a large spectrum of evolution strategies and other derivative-free optimization methods thanks to the Nevergrad framework (Rapin & Teytaud, 2018). Outline of the paper. We present in Section 2 the related work on adversarial attacks. Section 3 presents the core of our approach. We introduce a new generic objective function and discuss two practical instantiations leading to a discrete and a continuous optimization problem. We then give more details on the best performing derivative-free optimization methods, and provide some insights on our models and optimization strategies. Section 4 is dedicated to a thorough experimental analysis, where we show that we reach state-of-the-art performance by comparing our models with the most powerful black-box approaches on both targeted and untargeted attacks. We also assess our models against the most effective defense strategy so far, based on adversarial training. We finally conclude our paper in Section 5. 2 RELATED WORK Adversarial attacks have a long-standing history in the machine learning community. Early works appeared in the mid-2000s, where the authors were concerned about spam classification (Biggio et al., 2009). Szegedy et al. (2014) revived this research topic by highlighting that deep convolutional networks can be easily fooled. Many adversarial attacks against deep neural networks have been proposed since then. One can distinguish two classes of attacks: white-box and black-box attacks. In the white-box setting, the adversary is supposed to have full knowledge of the network (architecture and parameters), while in the black-box one, the adversary only has limited access to the network: she does not know the architecture, and can only query the network and get labels, logits or probabilities for her queries. An attack is said to have succeeded (we also talk about Attack Success Rate) if the input was originally well classified and the generated example is classified to the targeted label. The white-box setting attracted more attention even though it is the less realistic of the two. The attacks are crafted by back-propagating the gradient of the loss function w.r.t. the input.
The problem is formulated as a non-convex optimization procedure that either constrains the perturbation or aims at minimizing its norm. Among the most popular ones, one can cite FGSM (Goodfellow et al., 2015), PGD (Kurakin et al., 2016; Madry et al., 2018a), Deepfool (Moosavi-Dezfooli et al., 2016), JSMA (Papernot et al., 2016b), the Carlini&Wagner attack (Carlini & Wagner, 2017) and EAD (Chen et al., 2018). The black-box setting is more realistic, but also more challenging. Two strategies emerged in the literature to craft attacks within this setting: transferability from a substitute network, and gradient estimation algorithms. Transferability has been pointed out by Papernot et al. (2017). It consists in generating a white-box adversarial example on a fully known substitute neural network, i.e. a network trained on the same classification task. This crafted adversarial example can be transferred to the targeted unknown network. Leveraging this property, Moosavi-Dezfooli et al. (2017) proposed an algorithm to craft a single adversarial attack that is the same for all examples and all networks. Despite the popularity of these methods, gradient estimation algorithms outperform transferability methods. Chen et al. (2017) proposed a variant of the powerful white-box attack introduced in (Carlini & Wagner, 2017), based on gradient estimation with finite differences. This method achieves good results in practice but requires a high number of queries to the network. To reduce the number of queries, Ilyas et al. (2018a) proposed to rely instead on Natural Evolution Strategies (NES). These derivative-free optimization approaches consist in estimating the parametric distribution of the minima of a given objective function. For most NES algorithms, this amounts to performing a natural gradient descent in the space of distributions (Ollivier et al., 2017). In (Al-Dujaili & O'Reilly, 2019), the authors propose to estimate the sign of the gradient instead of estimating its magnitude using zeroth-order optimization techniques. They further show how to reduce the search space from exponential to linear. The achieved results were state of the art at the publication date. In Liu et al. (2019), the authors introduced a zeroth-order version of the signSGD algorithm, studied its convergence properties and showed its efficiency in crafting adversarial black-box attacks. The results are promising but fail to beat the state of the art. In Tu et al. (2019), the authors introduce the AutoZOOM framework, combining gradient estimation and an auto-encoder trained offline with unlabeled data. The idea is appealing but requires training an auto-encoder with an available dataset, which is an additional effort for the attacker. Besides, this may be unrealistic for several use cases. More recently, Moon et al. (2019) proposed a method based on discrete and combinatorial optimization where the perturbations are pushed towards the corners of the ℓ∞ ball. This method is, to the best of our knowledge, the state of the art in the black-box setting in terms of query budget and success rate. We will focus on this method in our experiments and show how our approaches achieve better results. Several defense strategies have been proposed to diminish the impact of adversarial attacks on network accuracy. A basic workaround, introduced in (Goodfellow et al., 2015), is to augment the learning set with adversarial attack examples. Such an approach is called adversarial training in the literature.
It helps recover some accuracy but fails to fully defend the network, and lacks theoretical guarantees, in particular principled certificates. Defenses based on randomization at inference time were also proposed (Lecuyer et al., 2018; Cohen et al., 2019; Pinot et al., 2019). These methods are grounded theoretically, but the guarantees cannot ensure full protection against adversarial examples. The question of defenses and attacks is still widely open since our understanding of this phenomenon is still in its infancy. We evaluate our approach against adversarial training, the most powerful defense method so far. 3 METHODS 3.1 GENERAL FRAMEWORK Let us consider a classification task $\mathcal{X} \mapsto [K]$, where $\mathcal{X} \subseteq \mathbb{R}^d$ is the input space and $[K] = \{1, \dots, K\}$ is the corresponding label set. Let $f : \mathbb{R}^d \to \mathbb{R}^K$ be a classifier (a feed-forward neural network in our paper) from the input space $\mathcal{X}$ returning the logits of each label in $[K]$, such that the predicted label for a given input is $\arg\max_{i\in[K]} f_i(x)$. The aim of $\|\cdot\|_\infty$-bounded untargeted adversarial attacks is, for some input $x$ with label $y$, to find a perturbation $\tau$ such that $\arg\max_{i\in[K]} f_i(x+\tau) \neq y$. Classically, $\|\cdot\|_\infty$-bounded untargeted adversarial attacks aim at optimizing the following objective:
$$\max_{\tau : \|\tau\|_\infty \leq \epsilon} L(f(x+\tau), y) \quad (1)$$
where $L$ is a loss function (typically the cross entropy) and $y$ the true label. For targeted attacks, the attacker targets a label $y_t$ by maximizing $-L(f(x+\tau), y_t)$. With access to the gradients of the network, gradient descent methods have proved their efficiency (Kurakin et al., 2016; Madry et al., 2018a). So far, the outline of most black-box attacks was to estimate the gradient using either finite differences or natural evolution strategies. Here, using evolution strategy heuristics, we avoid the gradient estimation problem altogether. 3.2 TWO OPTIMIZATION PROBLEMS In some DFO approaches, the default search space is $\mathbb{R}^d$. In the ℓ∞-bounded adversarial attack setting, the search space is $B_\infty(\epsilon) = \{\tau : \|\tau\|_\infty \leq \epsilon\}$. This requires adapting the problem in Eq. 1. Two variants are proposed in the sequel, leading to continuous and discretized versions of the problem. The continuous problem. As in Carlini & Wagner (2017), we use the hyperbolic tangent transformation to restate our problem since $B_\infty(\epsilon) = \tanh(\mathbb{R}^d)$. This leads to a continuous search space on which evolution strategies apply. Hence our optimization problem writes:
$$\max_{\tau\in\mathbb{R}^d} L(f(x + \tanh(\tau)), y). \quad (2)$$
We will call this problem DFOc-optimizer, where optimizer is the black-box derivative-free optimization strategy used. The discretized problem. Moon et al. (2019) pointed out that PGD attacks (Kurakin et al., 2016; Madry et al., 2018b) are mainly located on the corners of the ℓ∞-ball. They consider optimizing the following:
$$\max_{\tau\in\{-\epsilon,+\epsilon\}^d} L(f(x+\tau), y). \quad (3)$$
The authors in (Moon et al., 2019) proposed a purely discrete combinatorial optimization to solve this problem (Eq. 3). As in Bello et al. (2017), we here consider how to automatically convert an algorithm designed for continuous optimization to discrete optimization. To make the problem in Eq. 3 compliant with our evolution strategy setting, we rewrite our problem by considering a stochastic function $f(x + \tau)$ where, for all $i$, $\tau_i \in \{-1,+1\}$ and $P(\tau_i = 1) = \mathrm{Softmax}(a_i, b_i) = \frac{e^{a_i}}{e^{a_i}+e^{b_i}}$. Hence our problem amounts to finding the best parameters $a_i$ and $b_i$ that optimize:
$$\min_{a,b}\; \mathbb{E}_{\tau\sim P_{a,b}}\big[L(f(x+\tau), y)\big] \quad (4)$$
We then rely on evolution strategies to find the parameters $a$ and $b$.
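To make the discretized formulation concrete, the sketch below samples a corner perturbation from $P_{a,b}$ and forms a Monte-Carlo estimate of the expectation in Eq. 4; `model_logits` stands for the attacked classifier, the ±ε scaling of the sign vector follows the corners of Eq. 3, and this is an illustrative sketch rather than the Nevergrad-based implementation used in the paper.

```python
import numpy as np

def sample_tau(a, b, eps, rng):
    """Draw tau with tau_i = +eps w.p. exp(a_i)/(exp(a_i)+exp(b_i)) and -eps otherwise."""
    p_plus = 1.0 / (1.0 + np.exp(b - a))  # softmax(a_i, b_i), written in a stable form
    return np.where(rng.random(a.shape) < p_plus, eps, -eps)

def cross_entropy(logits, y):
    """L(f(x), y) = -log softmax(f(x))_y, computed in a numerically stable way."""
    m = logits.max()
    return -(logits[y] - m - np.log(np.exp(logits - m).sum()))

def expected_loss(a, b, x, y, model_logits, eps, rng, n_samples=4):
    """Monte-Carlo estimate of E_{tau ~ P_{a,b}}[L(f(x + tau), y)]; the derivative-free
    optimizer adjusts (a, b) from repeated evaluations of this quantity."""
    vals = []
    for _ in range(n_samples):
        tau = sample_tau(a, b, eps, rng)
        vals.append(cross_entropy(model_logits(np.clip(x + tau, 0.0, 1.0)), y))
    return float(np.mean(vals))
```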
As the optima are deterministic, the optimal values for $a$ and $b$ are at infinity. Some ES algorithms are well suited to such a setting, as will be discussed in the sequel. We will call this problem DFOd-optimizer, where optimizer is the black-box derivative-free optimization strategy used for $a$ and $b$. In this case, one could reduce the problem to a single variable $a_i$ with $P(\tau_i = 1) = \frac{1}{1+e^{-a_i}}$, but experimentally the results are comparable, so we concentrate on Problem 4. 3.3 DERIVATIVE-FREE OPTIMIZATION METHODS Derivative-free optimization methods are aimed at optimizing an objective function without access to the gradient. There exists a large literature on derivative-free optimisation. In this setting, an algorithm aims to minimize some function $f$ on some space $\mathcal{X}$; the only thing the algorithm can do is query the value $f(x)$ at chosen points $x$. As evaluating $f$ can be computationally expensive, the purpose of DFO methods is to get a good approximation of the optima using a moderate number of queries. We tested several evolution strategies (Rechenberg, 1973; Beyer, 2001): the simple (1+1)-algorithm (Matyas, 1965; Schumer & Steiglitz, 1968) and Covariance Matrix Adaptation (CMA; Hansen & Ostermeier, 2003). For these methods, the underlying principle is to iteratively update some distribution $P_\theta$ defined on $\mathcal{X}$. Roughly speaking, the current distribution $P_\theta$ represents the current belief about the location of the optima of the objective function. The parameters are updated using objective function values at different points. It turns out that this family of algorithms, which can be reinterpreted as natural evolution strategies, performs best. The two best performing methods are detailed in Section 3.3.1; we refer to the references above for the other tested methods. 3.3.1 OUR BEST PERFORMING METHODS: EVOLUTION STRATEGIES The (1+1)-ES algorithm. The (1+1)-evolution strategy with one-fifth rule (Matyas, 1965; Schumer & Steiglitz, 1968) is a simple but effective derivative-free optimization algorithm (see Alg. 1 in the supplementary material). Compared to random search, this algorithm moves the center of the Gaussian sampling according to the best candidate and adapts its scale by taking into account the frequency of successful mutations. Yao & Liu (1996) proposed the use of Cauchy distributions instead of classical Gaussian sampling. This favors large steps, and improves the results in case of (possibly partial) separability of the problem, i.e. when it is meaningful to perform large steps in some directions and very moderate ones in the others. CMA-ES algorithm. The Covariance Matrix Adaptation Evolution Strategy (Hansen & Ostermeier, 2003) combines evolution strategies (Beyer, 2001), Cumulative Step-Size Adaptation (Arnold & Beyer, 2004), and a specific method for adapting the covariance matrix. An outline is provided in Alg. 2 in the supplementary material. CMA-ES is an effective and robust algorithm, but it becomes catastrophically slow in high dimension due to the expensive computation of the square root of the covariance matrix. As a workaround, Ros & Hansen (2008) propose to approximate the covariance matrix by a diagonal one. This leads to a computational cost linear in the dimension, rather than the original quadratic one. Link with Natural Evolution Strategy (NES) attacks. Both (1+1)-ES and CMA-ES can be seen as instantiations of a natural evolution strategy (see for instance Ollivier et al. (2017); Wierstra et al. (2014)).
A natural evolution strategy consists in iteratively estimating the distribution of the optima. For most NES approaches, a fortiori CMA-ES, the iterative estimation consists in a second-order gradient descent (also known as natural gradient) in the space of distributions (e.g. Gaussians). The (1+1)-ES can also be seen as a NES, where the covariance matrix is restricted to be proportional to the identity. Note however that from an algorithmic perspective, both CMA-ES and (1+1)-ES optimize the quantile of the objective function. 3.3.2 HYPOTHESES FOR DFO METHODS IN THE ADVERSARIAL ATTACKS CONTEXT The state of the art in DFO and intuition suggest the following. Using the softmax formulation to explore only points in the corners (Eq. 3) is better for moderate budgets, as corners are known to be good adversarial candidates; however, for high-precision attacks (with small τ), the smooth continuous formulation (Eq. 2) is more relevant. With or without softmax, the optimum is at infinity¹, which favors methods with fast step-size adaptation or samplings with heavy-tailed distributions. With an optimum at infinity, Chotard et al. (2012) have shown how fast the step-size adaptation is when using cumulative step-size adaptation (as in CMA-ES), as opposed to the slower rates of most methods. Cauchy sampling (Yao & Liu, 1996) in the (1+1)-ES is known for favoring fast changes; this is consistent with the superiority of Cauchy sampling over Gaussian sampling in our setting. Newuoa, Powell, SQP and Bayesian optimization are present in Nevergrad, but they have an expensive initial sampling stage (its budget consumption is linear w.r.t. the dimension), which is not affordable in our high-dimensional / moderate-budget context. The targeted case needs more precision and favors algorithms such as Diagonal CMA-ES, which adapt a step-size per coordinate, whereas the untargeted case is more in favor of fast random exploration such as the (1+1)-ES. Compared to Diagonal-CMA, CMA with full covariance might be too slow; given a number of queries (rather than a time budget) it is however optimal for high precision. 3.4 THE TILING TRICK Ilyas et al. (2018b) suggested to tile the attack to lower the number of queries necessary to fool the network. Concretely, they observe that the gradient coordinates are correlated for close pixels in the images, so they suggested to add the same noise for small square tiles in the image (see Fig. 1). We exploit the same trick since it reduces the dimensionality of the search space, and hence makes evolution strategies suited to the problem at hand. Besides breaking the curse of dimensionality, tiling surprisingly leads to a new property that we discovered during our experiments: at a given tiling scale, convolutional neural networks are not robust to random noise. Section 4.2 is devoted to this intriguing property. Interestingly enough, initializing our optimization algorithms with tiled noise at the appropriate scale drastically speeds up convergence, leading to a reduced number of queries. ¹ i.e., the optima of the ball-constrained problem (1) would be close to or on the boundary of the ℓ∞ ball. In that case, the optimum of the continuous problem (2) will be at ∞ or close to it. In the discrete case (4), it is easy to see that the optimum is reached when $a_i$ or $b_i \to \infty$. 4 EXPERIMENTS 4.1 GENERAL SETTING AND IMPLEMENTATION DETAILS We compare our approach to the “bandits” method (Ilyas et al., 2018b) and the parsimonious attack (Moon et al., 2019).
The latter (parsimonious attack) is, to the best of our knowledge, the state of the art in the black-box setting from the literature; the bandits method is also considered in our benchmark given its ties to our models. We reproduced the results from (Moon et al., 2019) in our setting for fair comparison. As explained in Section 3.2, our attacks can be interpreted as ℓ∞ ones. We use the large-scale ImageNet dataset (Deng et al., 2009). As usually done in most frameworks, we quantify our success in terms of attack success rate, median queries and average queries. Here, the number of queries refers to the number of requests to the output logits of a classifier for a given image. For the success rate, we only consider the images that were correctly classified by our model. We use the InceptionV3 (Szegedy et al., 2017), VGG16 (Simonyan & Zisserman, 2014) with batch normalization (VGG16bn) and ResNet50 (He et al., 2016) architectures to measure the performance of our algorithm on the ImageNet dataset. These models reach accuracy close to the state of the art, with around 75-80% Top-1 accuracy and 95% Top-5 accuracy. We use pretrained models from PyTorch (Paszke et al., 2017). All images are normalized to [0, 1]. Results on VGG16bn and ResNet50 are deferred to supplementary material E. The images to be attacked are selected at random. We first show that convolutional networks are not robust to tiled random noise, and more surprisingly that there exists an optimal tile size that is the same for all architectures and noise intensities. Then, we evaluate our methods on both targeted and untargeted objectives. We considered the following losses: the cross entropy $L(f(x), y) = -\log(P(y|x))$ and a loss inspired by the “Carlini&Wagner” attack: $L(f(x), y) = -P(y|x) + \max_{y'\neq y} P(y'|x)$, where $P(y|x) = [\mathrm{Softmax}(f(x))]_y$ is the probability for the classifier to classify the input $x$ to label $y$. The results for the second loss are deferred to supplementary material C. For all our attacks, we use the Nevergrad (Rapin & Teytaud, 2018) implementation of evolution strategies. We did not change the default parameters of the optimization strategies. 4.2 CONVOLUTIONAL NEURAL NETWORKS ARE NOT ROBUST TO TILED RANDOM NOISE In this section, we highlight that neural networks are not robust to ℓ∞ tiled random noise. A noise on an image is said to be tiled if the added noise on the image is the same on small squares of pixels (see Figure 2). In practice, we divide our image into equally sized tiles. For each tile, we add to the image a randomly chosen constant noise: +ε with probability 1/2 and −ε with probability 1/2, uniform over the tile. The tile trick was introduced in Ilyas et al. (2018a) for dimensionality reduction. Here we exhibit a new behavior that we discovered during our experiments. As shown in Fig. 1, for a reasonable noise intensity (ε = 0.05), the success rate of a one-shot randomly tiled attack is quite high. This fact is observed on many neural network architectures. We compare in terms of the number of tiles since the input image sizes are not the same for all architectures (299 × 299 × 3 for InceptionV3 and 224 × 224 × 3 for VGG16bn and ResNet50). The optimal number of tiles (in the sense of attack success rate) is, surprisingly, independent of the architecture and the noise intensity. We also note that the InceptionV3 architecture is more robust to random tiled noise than the VGG16bn and ResNet50 architectures.
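The one-shot tiled random attack of this section admits a very short sketch: one ±ε value is drawn per tile (and per channel here, an implementation choice the text leaves open) and upsampled to the image resolution; `predict_label` is a placeholder for the attacked classifier.

```python
import numpy as np

def tiled_random_noise(height, width, channels, n_tiles, eps, rng):
    """Draw a +/-eps value per tile, then repeat it so the noise is constant on each tile."""
    coarse = rng.choice([-eps, eps], size=(n_tiles, n_tiles, channels))
    rows = np.repeat(coarse, int(np.ceil(height / n_tiles)), axis=0)[:height]
    return np.repeat(rows, int(np.ceil(width / n_tiles)), axis=1)[:, :width]

def one_shot_tiled_attack(x, y, predict_label, n_tiles=50, eps=0.05, rng=None):
    """Single-query attack: add tiled noise, clip to [0, 1], check whether the label flips."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w, c = x.shape  # assumes an (H, W, C) image normalized to [0, 1]
    x_adv = np.clip(x + tiled_random_noise(h, w, c, n_tiles, eps, rng), 0.0, 1.0)
    return x_adv, predict_label(x_adv) != y
```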
InceptionV3 blocks are parallel convolutions with different filter sizes that are concatenated. Using different filter sizes may attenuate the effect of the tiled noise since some convolution sizes might be less sensitive. We test this with a single random attack with various numbers of tiles (cf. Figures 1 and 2). We plot additional graphs in supplementary material B. 4.3 UNTARGETED ADVERSARIAL ATTACKS We first evaluate our attacks in the untargeted setting. The aim is to change the predicted label of the classifier. Following (Moon et al., 2019; Ilyas et al., 2018b), we use 10,000 images that are initially correctly classified and we limit the budget to 10,000 queries. We experimented with 30 and 50 tiles on the images. Only the best performing methods are reported in Table 1. We compare our results with (Moon et al., 2019) and (Ilyas et al., 2018b) on InceptionV3 (cf. Table 1). We also plot the cumulative success rate in terms of required budget in Figure 3, and evaluate our attacks for smaller noise in supplementary material D. We achieve results outperforming or at least equal to the state of the art in all cases. More remarkably, we significantly improve the number of queries necessary to fool the classifiers. The tiling trick partially explains why the average and the median number of queries are low. Indeed, the first queries of our evolution strategies are in general close to random search and hence, according to the observation of Figs. 1-2, the first steps are more likely to fool the network, which explains why the query budget remains low. Discrete strategies reach better median numbers of queries, which is consistent since we directly search on the boundary of the ℓ∞-ball; however, given the restricted search space (only corners of the search space are considered), the success rate is lower and on average the number of queries increases due to hard cases. 4.4 TARGETED ADVERSARIAL ATTACKS We also evaluate our methods in the targeted case on the ImageNet dataset. We selected 1,000 correctly classified images. Since the targeted task is harder than the untargeted case, we set the maximum budget to 100,000 queries and ε = 0.05. We uniformly chose the target class among the incorrect ones. We evaluated our attacks in comparison with the bandits method (Ilyas et al., 2018b) and the parsimonious attack (Moon et al., 2019) on the InceptionV3 classifier. We also plot the cumulative success rate in terms of required budget in Figure 3. CMA-ES beats the state of the art on all criteria. DiagonalCMA-ES obtains acceptable results but is less powerful than CMA-ES in this specific case. The classical CMA optimizer is more precise, even if the run time is much longer. Cauchy (1+1)-ES and discretized optimization reach good results, but when the task is more complicated they do not match the state of the art in black-box targeted attacks. 4.5 UNTARGETED ATTACKS AGAINST AN ADVERSARIALLY TRAINED NETWORK In this section, we evaluate our attacks against a network defended by adversarial training (Goodfellow et al., 2015). Since adversarial training is computationally expensive, we restricted ourselves to the CIFAR10 dataset (Krizhevsky et al., 2009) for this experiment. The image size is 32 × 32 × 3. We adversarially trained a WideResNet28x10 (Zagoruyko & Komodakis, 2016) with PGD ℓ∞ attacks (Kurakin et al., 2016; Madry et al., 2018a) of norm 8/256 and 10 steps of size 2/256. In this setting, we randomly selected 1,000 images and limited the budget to 20,000 queries.
We ran PGD ℓ∞ attacks (Kurakin et al., 2016; Madry et al., 2018a) of norm 8/256 and 20 steps of size 1/256 against our network, and achieved a success rate of up to 36%, which is the state of the art in the white-box setting. We also compared our method to the parsimonious and bandits attacks. Results are reported in Appendix F. On this task, the parsimonious attack method is slightly better than our best approach. 5 CONCLUSION In this paper, we proposed a new framework for crafting black-box adversarial attacks based on derivative-free optimization. Because of the high dimensionality and the characteristics of the problem (see Section 3.3.2), not all optimization strategies give satisfying results. However, combined with the tiling trick, evolution strategies such as CMA, DiagonalCMA and Cauchy (1+1)-ES beat the current state of the art in both targeted and untargeted settings. In particular, DFOc-CMA improves the state of the art in terms of success rate in almost all settings. We also validated the robustness of our attack against an adversarially trained network. Future work will be devoted to better understanding the intriguing property that a neural network is not robust to a one-shot randomly tiled attack.

A ALGORITHMS

A.1 THE (1+1)-ES ALGORITHM

Algorithm 1 The (1+1) Evolution Strategy.
Require: Function $f : \mathbb{R}^d \to \mathbb{R}$ to minimize
$m \leftarrow 0$, $\sigma \leftarrow 1$
for $t = 1 \dots n$ do
  (Generate candidate) Generate $m' \sim m + \sigma X$, where $X$ is sampled from a Cauchy or Gaussian distribution.
  if $f(m') \leq f(m)$ then
    $m \leftarrow m'$, $\sigma \leftarrow 2\sigma$
  else
    $\sigma \leftarrow 2^{-1/4}\sigma$
  end if
end for

A.2 CMA-ES ALGORITHM

Algorithm 2 CMA-ES algorithm. The superscript $T$ denotes transposition.
Require: Function $f : \mathbb{R}^d \to \mathbb{R}$ to minimize, parameters $b$, $c$, $w_1 > \dots > w_\mu > 0$, $p_c$ and others as in e.g. (Hansen & Ostermeier, 2003).
$m \leftarrow 0$, $C \leftarrow Id$, $\sigma \leftarrow 1$
for $t = 1 \dots n$ do
  Generate $x_1, \dots, x_\lambda \sim m + \sigma\mathcal{N}(0, C)$.
  Define $x'_i$ as the $i$-th best of the $x_i$.
  Update the cumulation for $C$: $p_c \leftarrow$ cumulation of $p_c$, the overall direction of progress.
  Update the covariance matrix: $C \leftarrow (1-c)\,C$ (inertia) $+ \frac{c}{b}\, p_c p_c^T$ (overall direction) $+ c\left(1-\frac{1}{b}\right)\sum_{i=1}^{\mu} w_i \frac{x'_i - m}{\sigma}\left(\frac{x'_i - m}{\sigma}\right)^T$ (“covariance” of the $\frac{1}{\sigma}x'_i$).
  Update the mean: $m \leftarrow \sum_{i=1}^{\mu} w_i x_{i:\lambda}$.
  Update $\sigma$ by cumulative step-size adaptation (Arnold & Beyer, 2004).
end for

B ADDITIONAL PLOTS FOR THE TILING TRICK

C RESULTS WITH “CARLINI&WAGNER” LOSS

In this section, we follow the same experimental setup as in Section 4.3, but we build our attacks with the “Carlini&Wagner” loss instead of the cross entropy. We note that the results are comparable.

D UNTARGETED ATTACKS WITH SMALLER NOISE INTENSITIES

We evaluated our method with smaller noise intensities (ε ∈ {0.01, 0.03, 0.05}) in the untargeted setting on the ImageNet dataset. In this framework, we also randomly picked 10,000 images and limited our budget to 10,000 queries. We compared to the bandits method (Ilyas et al., 2018b) and to the parsimonious attack (Moon et al., 2019) on the InceptionV3 network. We limited our experiments to 50 tiles. We report our results in Table 4. We note that our attacks reach the state of the art for ε = 0.03 and ε = 0.05, both in terms of success rate and query budget. For ε = 0.01, we reach results comparable to the state of the art.

E UNTARGETED ATTACKS AGAINST OTHER ARCHITECTURES

We also evaluated our method on different neural network architectures. For each network we randomly selected 10,000 images that were correctly classified. We limit our budget to 10,000 queries and set the number of tiles to 50.
We achieve an attack success rate of up to 100% on every classifier, with a median budget as low as 8 queries for VGG16bn, for instance (see Table 5). One should notice that performance is lower on InceptionV3, as is also reported for the bandits method in (Ilyas et al., 2018b). This is possibly due to the fact that the tiling trick is less relevant on the Inception network than on the other networks (see Fig. 2).

F TABLE FOR ATTACKS AGAINST ADVERSARIALLY TRAINED NETWORK

G FAILING METHODS

In this section, we compare our attacks to other optimization strategies. We run our experiments in the same setup as in Section 4.3. Results are reported in Table 7. DE and Normal (1+1)-ES perform poorly, probably because these optimization strategies converge more slowly when the optima are at “infinity”. Finally, as the initialization of Powell is linear in the dimension and has less variance, it performs worse than simple random search. The Newuoa, SQP and Cobyla algorithms have also been tried on a smaller number of images (we did not report the results), but their initialization is also linear in the dimension, so they reach very poor results too.
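For completeness, a runnable sketch of the (1+1)-ES of Algorithm 1, with the success-based step-size rule and optional Cauchy mutations, is given below; it mirrors the pseudocode in Appendix A rather than the exact Nevergrad implementation used in the experiments, and the toy quadratic objective is purely illustrative.

```python
import numpy as np

def one_plus_one_es(loss, dim, budget, rng, cauchy=True, sigma0=1.0):
    """(1+1)-ES: keep a single parent m, mutate it, accept improvements, and adapt
    the step size (doubled on success, multiplied by 2**-0.25 on failure)."""
    m = np.zeros(dim)
    best = loss(m)
    sigma = sigma0
    for _ in range(budget):
        step = rng.standard_cauchy(dim) if cauchy else rng.standard_normal(dim)
        candidate = m + sigma * step
        value = loss(candidate)
        if value <= best:
            m, best = candidate, value
            sigma *= 2.0
        else:
            sigma *= 2.0 ** -0.25
    return m, best

# Illustration on a toy quadratic; in the attack setting, `loss` would be the (negated)
# objective of Eq. 2 or Eq. 4 evaluated through queries to the classifier.
rng = np.random.default_rng(0)
x_best, f_best = one_plus_one_es(lambda z: float(np.sum(z ** 2)), dim=10, budget=200, rng=rng)
```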
1. What is the focus of the paper regarding query-efficient black-box attacks? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. Do you have any questions or concerns about the tiling trick and its contribution to the attack efficiency? 4. How does the reviewer assess the novelty and insightfulness of the paper's content? 5. Are there any suggestions for improving the paper, such as providing more intuition, explanations, or demonstrative experiments? 6. What is the significance of the results in Table 1 and Table 3, and how do they relate to the research question? 7. How does the reviewer evaluate the effectiveness and efficiency of the proposed method in comparison to other black-box attacks?
Review
Review This paper proposed a new query-efficient black-box attack algorithm using better evolution strategies. The authors also add a tiling trick to make the attack even more efficient. The experimental results show that the proposed method achieves state-of-the-art attack efficiency in the black-box setting. The paper indeed presented slightly better results than the current state-of-the-art black-box attacks. It is clearly written and easy to follow; however, the paper itself does not bring much insightful information. The major components of the proposed method are two things: using better evolution strategies and using the tiling trick. The tiling trick is not new; it was introduced in (Ilyas et al., 2018) and is also discussed in (Moon et al., 2019). The authors further empirically studied the best choice of tiling size. I appreciated that, but will not count it as a major contribution. In terms of better evolution strategies, the authors show that (1+1)-ES and CMA-ES can achieve better attack results, but the paper lacks intuition/explanations of why these help and what the difference is. It would be best if the authors could provide some theory to show the advantages of the proposed method; if not, at least the authors should give more intuition/explanations/demonstrative experiments to show the advantages.
Detailed comments:
- In Section 3.2, is the form of the discretized problem a standard way to transform a continuous problem into a discrete one? What is the intuition of using a and b? Have you considered using only one variable to do it?
- In Section 3.3.2, what do you mean by “with or without softmax, the optimum is at infinity”? I hope the authors could further explain it.
- In eq (2), do you mean max_{\tau} L(f(x + \epsilon tanh(\tau)), y)?
- In Section 3.3.1, the authors said (1+1)-ES and CMA-ES can be seen as an instantiation of NES. Can the authors further elaborate on this?
- Can the authors provide the algorithm for DiagonalCMA?
- It would be better to put the evolution strategy algorithms in the main paper and discuss them.
- Can the authors also comment on / compare the results with the following relevant papers? Li, Yandong, et al. "NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks." ICML 2019. Chen, Jinghui, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks." arXiv preprint arXiv:1811.10828 (2018).
- In Table 1, why are the # of tiles entries missing for the Parsimonious and Bandit methods? I think both of the baselines use the tiling trick, and they should also be run using the optimal tiling size. The result seems directly copied from the Parsimonious paper; it makes more sense to rerun it in your setting and environment because the sampled data points may not be the same. Since CMA costs significantly more time, it would make for a fairer comparison to also report the attack time needed for each method.
- In Table 3, why are there no comparisons with the Bandit and Parsimonious attacks?
====================== after the rebuttal I thank the authors for their response, but I still feel that there is a lot more to improve for this paper in terms of intuition and experiments. Therefore I decided to keep my score unchanged.
ICLR
Title Yet another but more efficient black-box adversarial attack: tiling and evolution strategies Abstract We introduce a new black-box attack achieving state of the art performances. Our approach is based on a new objective function, borrowing ideas from `∞-white box attacks, and particularly designed to fit derivative-free optimization requirements. It only requires to have access to the logits of the classifier without any other information which is a more realistic scenario. Not only we introduce a new objective function, we extend previous works on black box adversarial attacks to a larger spectrum of evolution strategies and other derivative-free optimization methods. We also highlight a new intriguing property that deep neural networks are not robust to single shot tiled attacks. Our models achieve, with a budget limited to 10, 000 queries, results up to 99.2% of success rate against InceptionV3 classifier with 630 queries to the network on average in the untargeted attacks setting, which is an improvement by 90 queries of the current state of the art. In the targeted setting, we are able to reach, with a limited budget of 100, 000, 100% of success rate with a budget of 6, 662 queries on average, i.e. we need 800 queries less than the current state of the art. 1 INTRODUCTION Despite their success, deep learning algorithms have shown vulnerability to adversarial attacks (Biggio et al., 2013; Szegedy et al., 2014), i.e. small imperceptible perturbations of the inputs, that lead the networks to misclassify the generated adversarial examples. Since their discovery, adversarial attacks and defenses have become one of the hottest research topics in the machine learning community as serious security issues are raised in many critical fields. They also question our understanding of deep learning behaviors. Although some advances have been made to explain theoretically (Fawzi et al., 2016; Sinha et al., 2017; Cohen et al., 2019; Pinot et al., 2019) and experimentally (Goodfellow et al., 2015; Xie et al., 2018; Meng & Chen, 2017; Samangouei et al., 2018; Araujo et al., 2019) adversarial attacks, the phenomenon remains misunderstood and there is still a gap to come up with principled guarantees on the robustness of neural networks against maliciously crafted attacks. Designing new and stronger attacks helps building better defenses, hence the motivation of our work. First attacks were generated in a setting where the attacker knows all the information of the network (architecture and parameters). In this white box setting, the main idea is to perturb the input in the direction of the gradient of the loss w.r.t. the input (Goodfellow et al., 2015; Kurakin et al., 2016; Carlini & Wagner, 2017; Moosavi-Dezfooli et al., 2016). This case is unrealistic because the attacker has only limited access to the network in practice. For instance, web services that propose commercial recognition systems such as Amazon or Google are backed by pretrained neural networks. A user can query this system by sending an image to classify. For such a query, the user only has access to the inference results of the classifier which might be either the label, probabilities or logits. Such a setting is coined in the literature as the black box setting. It is more realistic but also more challenging from the attacker’s standpoint. As a consequence, several works proposed black box attacks by just querying the inference results of a given classifier. 
A natural way consists in exploiting the transferability of an adversarial attack, based on the idea that if an example fools a classifier, it is more likely that it fools another one (Papernot et al., 2016a). In this case, a white box attack is crafted on a fully known classifier. Papernot et al. (2017) exploited this property to derive practical black box attacks. Another approach within the black box setting consists in estimating the gradient of the loss by querying the classifier (Chen et al., 2017; Ilyas et al., 2018a;b). For these attacks, the PGD attack (Kurakin et al., 2016; Madry et al., 2018a) algorithm is used and the gradient is replaced by its estimation. In this paper, we propose efficient black box adversarial attacks using stochastic derivative free optimization (DFO) methods with only access to the logits of the classifier. By efficient, we mean that our model requires a limited number of queries while outperforming the state of the art in terms of attack success rate. At the very core of our approach is a new objective function particularly designed to suit classical derivative free optimization. We also highlight a new intriguing property that deep neural networks are not robust to single shot tiled attacks. It leverages results and ideas from `∞-attacks. We also explore a large spectrum of evolution strategies and other derivative-free optimization methods thanks to the Nevergrad framework (Rapin & Teytaud, 2018). Outline of the paper. We present in Section 2 the related work on adversarial attacks. Section 3 presents the core of our approach. We introduce a new generic objective function and discuss two practical instantiations leading to a discrete and a continuous optimization problems. We then give more details on the best performing derivative-free optimization methods, and provide some insights on our models and optimization strategies. Section 4 is dedicated to a thorough experimental analysis, where we show we reach state of the art performances by comparing our models with the most powerful black-box approaches on both targeted and untargeted attacks. We also assess our models against the most efficient so far defense strategy based on adversarial training. We finally conclude our paper in Section 5. 2 RELATED WORK Adversarial attacks have a long standing history in the machine learning community. Early works appeared in the mid 2000’s where the authors were concerned about Spam classification (Biggio et al., 2009). Szegedy et al. (2014) revives this research topic by highlighting that deep convolutional networks can be easily fooled. Many adversarial attacks against deep neural networks have been proposed since then. One can distinguish two classes of attacks: white box and black box attacks. In the white box setting, the adversary is supposed to have full knowledge of the network (architecture and parameters), while in the black box one, the adversary only has limited access to the network: she does not know the architecture, and can only query the network and gets labels, logits or probabilities from her queries. An attack is said to have suceeded (we also talk about Attack Success Rate), if the input was originally well classified and the generated example is classified to the targeted label. The white box setting attracted more attention even if it is the more unrealistic between the two. The attacks are crafted by by back-propagating the gradient of the loss function w.r.t. the input. 
The problem writes as a non-convex optimization procedure that either constraints the perturbation or aims at minimizing its norm. Among the most popular ones, one can cite FGSM (Goodfellow et al., 2015), PGD (Kurakin et al., 2016; Madry et al., 2018a), Deepfool (Moosavi-Dezfooli et al., 2016), JSMA (Papernot et al., 2016b), Carlini&Wagner attack (Carlini & Wagner, 2017) and EAD (Chen et al., 2018). The black box setting is more realistic, but also more challenging. Two strategies emerged in the literature to craft attacks within this setting: transferability from a substitute network, and gradient estimation algorithms. Transferability has been pointed out by Papernot et al. (2017). It consists in generating a white-box adversarial example on a fully known substitute neural network, i.e. a network trained on the same classification task. This crafted adversarial example can be transferred to the targeted unknown network. Leveraging this property, Moosavi-Dezfooli et al. (2017) proposed an algorithm to craft a single adversarial attack that is the same for all examples and all networks. Despite the popularity of these methods, gradient estimation algorithms outperform transferability methods. Chen et al. (2017) proposed a variant of the powerful white-box attack introduced in (Carlini & Wagner, 2017), based on gradient estimation with finite differences. This method achieves good results in practice but requires a high number of queries to the network. To reduce the number of queries, Ilyas et al. (2018a) proposed to rely rather on Natural Evolution Strategies (NES). These derivative-free optimization approaches consist in estimating the parametric distribution of the min- ima of a given objective function. This amounts for most of NES algorithms to perform a natural gradient descent in the space of distributions (Ollivier et al., 2017). In (Al-Dujaili & O’Reilly, 2019), the authors propose to rather estimate the sign of the gradient instead of estimating the its magnitude suing zeroth-order optimization techniques. They show further how to reduce the search space from exponential to linear. The achieved results were state of the art at the publication date. In Liu et al. (2019), the authors introduced a zeroth-order version of the signSGD algorithm, studied its convergence properties and showed its efficiency in crafting adversarial black-box attacks. The results are promising but fail to beat the state of the art. In Tu et al. (2019), the authors introduce the AutoZOOM framework combining gradient estimation and an auto-encoder trained offline with unlabeled data. The idea is appealing but requires training an auto-encoder with an available dataset, which an additional effort for the attacker. Besides, this may be unrealistic for several use cases. More recently, Moon et al. (2019) proposed a method based on discrete and combinatorial optimization where the perturbations are pushed towards the corners of the `∞ ball. This method is to the best of our knowledge the state of the art in the black box setting in terms of queries budget and success rate. We will focus in our experiments on this method and show how our approaches achieve better results. Several defense strategies have been proposed to diminish the impact of adversarial attacks on networks accuracies. A basic workaround, introduced in (Goodfellow et al., 2015), is to augment the learning set with adversarial attacks examples. Such an approach is called adversarial training in the literature. 
It helps recovering some accuracy but fails to fully defend the network, and lacks theoretical guarantees, in particular principled certificates. Defenses based on randomization at inference time were also proposed (Lecuyer et al., 2018; Cohen et al., 2019; Pinot et al., 2019). These methods are grounded theoretically, but the guarantees cannot ensure full protection against adversarial examples. The question of defenses and attacks is still widely open since our understanding of this phenomenon is still in its infancy. We evaluate our approach against adversarial training, the most powerful defense method so far. 3 METHODS 3.1 GENERAL FRAMEWORK Let us consider a classification task X 7→ [K] where X ⊆ Rd is the input space and [K] = {1, ...,K} is the corresponding label set. Let f : Rd → RK be a classifier (a feed forward neural network in our paper) from an input space X returning the logits of each label in [K] such that the predicted label for a given input is argmaxi∈[K] fi(x). The aim of ||.||∞-bounded untargeted adversarial attacks is, for some input x with label y, to find a perturbation τ such that argmaxi∈[K] fi(x) 6= y. Classically, ||.||∞-bounded untargeted adversarial attacks aims at optimizing the following objective: max τ :||τ ||∞≤ L(f(x+ τ), y) (1) where L is a loss function (typically the cross entropy) and y the true label. For targeted attacks, the attacker targets a label yt by maximizing −L(f(x+ τ), yt). With access to the gradients of the network, gradient descent methods have proved their efficiency (Kurakin et al., 2016; Madry et al., 2018a). So far, the outline of most black box attacks was to estimate the gradient using either finite differences or natural evolution strategies. Here using evolutionary strategies heuristics, we do not want to take care of the gradient estimation problem. 3.2 TWO OPTIMIZATION PROBLEMS In some DFO approaches, the default search space is Rd. In the `∞ bounded adversarial attacks setting, the search space isB∞( ) = {τ : ||τ ||∞ ≤ }. It requires to adapt the problem in Eq 1. Two variants are proposed in the sequel leading to continuous and discretized versions of the problem. The continuous problem. As in Carlini & Wagner (2017), we use the hyperbolic tangent transformation to restate our problem since B∞( ) = tanh (Rd). This leads to a continuous search space on which evolutionary strategies apply. Hence our optimization problem writes: max τ∈Rd L(f(x+ tanh(τ)), y). (2) We will call this problem DFOc− optimizer where optimizer is the used black box derivative free optimization strategy. The discretized problem. Moon et al. (2019) pointed out that PGD attacks (Kurakin et al., 2016; Madry et al., 2018b) are mainly located on the corners of the `∞-ball. They consider optimizing the following max τ∈{− ,+ }d L(f(x+ τ), y). (3) The author in (Moon et al., 2019) proposed a purely discrete combinatorial optimization to solve this problem (Eq. 3). As in Bello et al. (2017), we here consider how to automatically convert an algorithm designed for continuous optimization to discrete optimization. To make the problem in Eq. 3 compliant with our evolutionary strategies setting, we rewrite our problem by considering a stochastic function f(x + τ) where, for all i, τi ∈ {−1,+1} and P(τi = 1) = Softmax(ai, bi) = eai eai+ebi . Hence our problem amounts to find the best parameters ai and bi that optimize: min a,b Eτ∼Pa,b(L(f(x+ τ), y) (4) We then rely on evolutionary strategies to find the parameters a and b. 
As the optima are deterministic, the optimal values for a and b are at infinity. Some ES algorithms are well suited to such setting as will be discussed in the sequel. We will call this problem DFOd− optimizer where optimizer is the used black box derivative free optimization strategy for a and b. In this case, one could reduce the problem to one variale ai with P(τi = 1) = 11+e−ai , but experimentally the results are comparable, so we concentrate on Problem 4. 3.3 DERIVATIVE-FREE OPTIMIZATION METHODS Derivative-free optimization methods are aimed at optimizing an objective function without access to the gradient. There exists a large and wide literature around derivative free optimisation. In this setting, one algorithm aims to minimize some function f on some space X . The only thing that could be done by this algorithm is to query for some points x the value of f(x). As evaluating f can be computationally expensive, the purpose of DFO methods is to get a good approximation of the optima using a moderate number of queries. We tested several evolution strategies (Rechenberg, 1973; Beyer, 2001): the simple (1+ 1)-algorithm (Matyas, 1965; Schumer & Steiglitz, 1968), Covariance Matrix Adaptation (CMA (Hansen & Ostermeier, 2003)). For these methods, the underlying algorithm is to iteratively update some distribution Pθ defined on X . Roughly speaking, the current distribution Pθ represents the current belief of the localization of the optimas of the goal function. The parameters are updated using objective function values at different points. It turns out that this family of algorithms, than can be reinterpreted as natural evolution strategies, perform best. The two best performing methods will be detailed in Section 3.3.1; we refer to references above for other tested methods. 3.3.1 OUR BEST PERFORMING METHODS: EVOLUTION STRATEGIES The (1 + 1)-ES algorithm. The (1 + 1)-evolution strategy with one-fifth rule (Matyas, 1965; Schumer & Steiglitz, 1968) is a simple but effective derivative-free optimization algorithm (in supplementary material, Alg. 1). Compared to random search, this algorithm moves the center of the Gaussian sampling according to the best candidate and adapts its scale by taking into account their frequency. Yao & Liu (1996) proposed the use of Cauchy distributions instead of classical Gaussian sampling. This favors large steps, and improves the results in case of (possibly partial) separability of the problem, i.e. when it is meaningful to perform large steps in some directions and very moderate ones in the other directions. CMA-ES algorithm. The Covariance Matrix Adaptation Evolution Strategy (Hansen & Ostermeier, 2003) combines evolution strategies (Beyer, 2001), Cumulative Step-Size Adaptation (Arnold & Beyer, 2004), and a specific method for adaptating the covariance matrix. An outline is provided in supplementary material, Alg. 2. CMA-ES is an effective and robust algorithm, but it becomes catastrophically slow in high dimension due to the expensive computation of the square root of the matrix. As a workaround, Ros & Hansen (2008) propose to approximate the covariance matrix by a diagonal one. This leads to a computational cost linear in the dimension, rather than the original quadratic one. Link with Natural Evolution Strategy (NES) attacks. Both (1+1)-ES and CMA-ES can be seen as an instantiation of a natural evolution strategy (see for instance Ollivier et al. (2017); Wierstra et al. (2014)). 
A natural evolution strategy consists in iteratively estimating the distribution of the optima. For most NES approaches, and a fortiori for CMA-ES, this iterative estimation is a second-order gradient descent (also known as natural gradient descent) in the space of distributions (e.g. Gaussians). (1+1)-ES can also be seen as a NES where the covariance matrix is restricted to be proportional to the identity. Note however that from an algorithmic perspective, both CMA-ES and (1+1)-ES optimize the quantile of the objective function. 3.3.2 HYPOTHESES FOR DFO METHODS IN THE ADVERSARIAL ATTACKS CONTEXT The state of the art in DFO and intuition suggest the following. Using the softmax for exploring only points in the corners (Eq. 3) is better for moderate budgets, as corners are known to be good adversarial candidates; however, for high-precision attacks (with small τ) the smooth continuous formulation (Eq. 2) is more relevant. With or without the softmax, the optimum is at infinity (see footnote 1), which is in favor of methods having fast step-size adaptation or sampling with heavy-tailed distributions. With an optimum at infinity, Chotard et al. (2012) have shown how fast the step-size adaptation is when using cumulative step-size adaptation (as in CMA-ES), as opposed to slower rates for most methods. Cauchy sampling (Yao & Liu, 1996) in the (1+1)-ES is known to favor fast changes; this is consistent with the superiority of Cauchy sampling in our setting compared to Gaussian sampling. Newuoa, Powell, SQP and Bayesian optimization are present in Nevergrad, but they have an expensive initial sampling stage (budget consumption linear w.r.t. the dimension), which is not affordable in our high-dimensional / moderate-budget context. The targeted case needs more precision and favors algorithms such as Diagonal CMA-ES, which adapt a step-size per coordinate, whereas the untargeted case is more in favor of fast random exploration such as the (1+1)-ES. Compared to Diagonal CMA, CMA with full covariance might be too slow; given a number of queries (rather than a time budget) it is however optimal for high precision. 3.4 THE TILING TRICK Ilyas et al. (2018b) suggested to tile the attack to lower the number of queries necessary to fool the network. Concretely, they observe that the gradient coordinates are correlated for close pixels in the images, so they suggested adding the same noise on small square tiles of the image (see Fig. 1). We exploit the same trick since it reduces the dimensionality of the search space, and hence makes evolution strategies suited to the problem at hand. Besides breaking the curse of dimensionality, tiling surprisingly leads to a new property that we discovered during our experiments: at a given tiling scale, convolutional neural networks are not robust to random noise. Section 4.2 is devoted to this intriguing property. Interestingly enough, initializing our optimization algorithms with a tiled noise at the appropriate scale drastically speeds up the convergence, leading to a reduced number of queries. Footnote 1: i.e. the optima of the ball-constrained problem (1) would be close to or on the boundary of the ℓ∞ ball. In that case, the optimum of the continuous problem (2) will be at infinity or close to it. In the discrete case (4), it is easy to see that the optimum is reached when a_i or b_i → ∞. 4 EXPERIMENTS 4.1 GENERAL SETTING AND IMPLEMENTATION DETAILS We compare our approach to the “bandits” method (Ilyas et al., 2018b) and the parsimonious attack (Moon et al., 2019).
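A short sketch may help make the tiled perturbations of Section 3.4 concrete before turning to the experiments; it assumes images stored as HxWxC arrays in [0, 1], and whether the tile sign is shared across colour channels is our assumption rather than something stated in the text.

import numpy as np

def tiled_sign_noise(height, width, channels, n_tiles, eps, rng):
    # One +-eps value per tile, shared across channels (an assumption), upsampled to pixel resolution.
    tile_h = int(np.ceil(height / n_tiles))
    tile_w = int(np.ceil(width / n_tiles))
    coarse = rng.choice([-eps, eps], size=(n_tiles, n_tiles, 1))
    noise = np.kron(coarse, np.ones((tile_h, tile_w, channels)))   # repeat each tile value over its square
    return noise[:height, :width, :]                                # crop to the exact image size

def one_shot_tiled_attack(x, predict_label, y, n_tiles, eps, rng):
    # Single random tiled perturbation; returns True if the predicted label changes (untargeted success).
    h, w, c = x.shape
    x_adv = np.clip(x + tiled_sign_noise(h, w, c, n_tiles, eps, rng), 0.0, 1.0)
    return predict_label(x_adv) != y

The same coarse perturbation, fed as the starting point of an evolution strategy, is what is meant above by initializing the optimizer with tiled noise.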
The parsimonious attack is, to the best of our knowledge, the state of the art in the black-box setting in the literature; the bandits method is also considered in our benchmark given its ties to our models. We reproduced the results from Moon et al. (2019) in our setting for a fair comparison. As explained in Section 3.2, our attacks can be interpreted as ℓ∞ ones. We use the large-scale ImageNet dataset (Deng et al., 2009). As usually done in most frameworks, we quantify our success in terms of attack success rate, median queries and average queries. Here, the number of queries refers to the number of requests to the output logits of a classifier for a given image. For the success rate, we only consider the images that were correctly classified by our model. We use the InceptionV3 (Szegedy et al., 2017), VGG16 with batch normalization (VGG16bn) (Simonyan & Zisserman, 2014) and ResNet50 (He et al., 2016) architectures to measure the performance of our algorithm on the ImageNet dataset. These models reach accuracy close to the state of the art, with around 75−80% Top-1 accuracy and 95% Top-5 accuracy. We use pretrained models from PyTorch (Paszke et al., 2017). All images are normalized to [0, 1]. Results on VGG16bn and ResNet50 are deferred to supplementary material E. The images to be attacked are selected at random. We first show that convolutional networks are not robust to tiled random noise, and more surprisingly that there exists an optimal tile size that is the same for all architectures and noise intensities. Then, we evaluate our methods on both targeted and untargeted objectives. We considered the following losses: the cross entropy L(f(x), y) = −log(P(y|x)) and a loss inspired by the “Carlini&Wagner” attack: L(f(x), y) = −P(y|x) + max_{y′≠y} P(y′|x), where P(y|x) = [Softmax(f(x))]_y is the probability for the classifier to classify the input x to label y. The results for the second loss are deferred to supplementary material C. For all our attacks, we use the Nevergrad (Rapin & Teytaud, 2018) implementation of evolution strategies. We did not change the default parameters of the optimization strategies. 4.2 CONVOLUTIONAL NEURAL NETWORKS ARE NOT ROBUST TO TILED RANDOM NOISE In this section, we highlight that neural networks are not robust to ℓ∞ tiled random noise. A noise on an image is said to be tiled if the added noise is the same on small squares of pixels (see Figure 2). In practice, we divide our image into equally sized tiles. For each tile, we add to the image a randomly chosen constant noise: +ε with probability 1/2 and −ε with probability 1/2, uniformly on the tile. The tiling trick has been introduced in Ilyas et al. (2018a) for dimensionality reduction. Here we exhibit a new behavior that we discovered during our experiments. As shown in Fig. 1, for a reasonable noise intensity (ε = 0.05) the success rate of a one-shot randomly tiled attack is quite high. This fact is observed on many neural network architectures. We compare numbers of tiles (rather than tile sizes) since the image input sizes are not the same for all architectures (299 × 299 × 3 for InceptionV3 and 224 × 224 × 3 for VGG16bn and ResNet50). The optimal number of tiles (in the sense of attack success rate) is, surprisingly, independent of the architecture and the noise intensity. We also note that the InceptionV3 architecture is more robust to random tiled noise than the VGG16bn and ResNet50 architectures.
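For completeness, the two attack losses introduced in Section 4.1 are straightforward to evaluate from the returned logits; the sketch below only illustrates those formulas and is not the paper's implementation.

import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy_loss(logits, y):
    # L(f(x), y) = -log P(y | x); the attacker maximises this quantity.
    return -np.log(softmax(logits)[y] + 1e-12)

def cw_style_loss(logits, y):
    # L(f(x), y) = -P(y | x) + max_{y' != y} P(y' | x); becomes positive once the attack succeeds.
    probs = softmax(logits)
    return -probs[y] + np.delete(probs, y).max()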
InceptionV3 blocks are parallel convolutions with different filter sizes that are concatenated. Using different filter sizes may attenuate the effect of the tiled noise, since some convolution sizes might be less sensitive. We test this with a single random attack with various numbers of tiles (cf. Figures 1 and 2). We plot additional graphs in supplementary material B. 4.3 UNTARGETED ADVERSARIAL ATTACKS We first evaluate our attacks in the untargeted setting. The aim is to change the predicted label of the classifier. Following (Moon et al., 2019; Ilyas et al., 2018b), we use 10,000 images that are initially correctly classified and we limit the budget to 10,000 queries. We experimented with 30 and 50 tiles on the images. Only the best-performing methods are reported in Table 1. We compare our results with (Moon et al., 2019) and (Ilyas et al., 2018b) on InceptionV3 (cf. Table 1). We also plot the cumulative success rate as a function of the budget in Figure 3, and we evaluate our attacks for smaller noise intensities in supplementary material D. We achieve results outperforming or at least equal to the state of the art in all cases. More remarkably, we improve by far the number of queries necessary to fool the classifiers. The tiling trick partially explains why the average and median numbers of queries are low. Indeed, the first queries of our evolution strategies are in general close to random search and hence, according to the observations of Figs 1-2, the first steps are more likely to fool the network, which explains why the query budget remains low. Discrete strategies reach better median numbers of queries, which is consistent since we search directly on the corners of the ℓ∞-ball; however, given the restricted search space (only corners are considered), the success rate is lower and on average the number of queries increases due to hard cases. 4.4 TARGETED ADVERSARIAL ATTACKS We also evaluate our methods in the targeted case on the ImageNet dataset. We selected 1,000 correctly classified images. Since the targeted task is harder than the untargeted one, we set the maximum budget to 100,000 queries and ε = 0.05. We uniformly chose the target class among the incorrect ones. We evaluated our attacks in comparison with the bandits method (Ilyas et al., 2018b) and the parsimonious attack (Moon et al., 2019) on the InceptionV3 classifier. We also plot the cumulative success rate in terms of required budget in Figure 3. CMA-ES beats the state of the art on all criteria. DiagonalCMA-ES obtains acceptable results but is less powerful than CMA-ES in this specific case. The classical CMA optimizer is more precise, even if the run time is much longer. Cauchy (1+1)-ES and discretized optimization reach good results, but when the task is more complicated they do not reach results as good as the state of the art in black-box targeted attacks. 4.5 UNTARGETED ATTACKS AGAINST AN ADVERSARIALLY TRAINED NETWORK In this section, we test our attacks against a network defended by adversarial training (Goodfellow et al., 2015). Since adversarial training is computationally expensive, we restricted ourselves to the CIFAR10 dataset (Krizhevsky et al., 2009) for this experiment. The image size is 32 × 32 × 3. We adversarially trained a WideResNet28x10 (Zagoruyko & Komodakis, 2016) with PGD ℓ∞ attacks (Kurakin et al., 2016; Madry et al., 2018a) of norm 8/256 and 10 steps of size 2/256. In this setting, we randomly selected 1,000 images, and limited the budget to 20,000 queries.
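For reference, a minimal sketch of the PGD ℓ∞ attack used for this adversarial training (and as the white-box baseline below) is given here in PyTorch; it is a standard formulation under our assumptions about the model and image tensors, not the exact training code of the paper.

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, step_size, n_steps):
    # Projected gradient ascent on the cross entropy, within the l_inf ball of radius eps around x.
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)   # random start
    x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                        # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)          # project back onto the l_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

With eps = 8/256, step_size = 2/256 and n_steps = 10 this roughly matches the training attack described above; the evaluation attack in the next paragraph uses 20 steps of size 1/256.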
We ran PGD ℓ∞ attacks (Kurakin et al., 2016; Madry et al., 2018a) of norm 8/256 and 20 steps of size 1/256 against our network, and achieved a success rate of up to 36%, which is the state of the art in the white-box setting. We also compared our method to the parsimonious and bandits attacks. Results are reported in Appendix 6. On this task, the parsimonious attack method is slightly better than our best approach. 5 CONCLUSION In this paper, we proposed a new framework for crafting black-box adversarial attacks based on derivative-free optimization. Because of the high dimensionality and the characteristics of the problem (see Section 3.3.2), not all optimization strategies give satisfying results. However, combined with the tiling trick, evolution strategies such as CMA, DiagonalCMA and Cauchy (1+1)-ES beat the current state of the art in both targeted and untargeted settings. In particular, DFOc-CMA improves the state of the art in terms of success rate in almost all settings. We also validated the robustness of our attack against an adversarially trained network. Future work will be devoted to better understanding the intriguing property that a neural network is not robust to a one-shot randomly tiled attack. A ALGORITHMS A.1 THE (1+1)-ES ALGORITHM Algorithm 1 The (1+1) Evolution Strategy. Require: Function f : R^d → R to minimize. m ← 0, σ ← 1. for t = 1...n do (Generate candidates) Generate m′ ∼ m + σX, where X is sampled from a Cauchy or Gaussian distribution. if f(m′) ≤ f(m) then m ← m′, σ ← 2σ else σ ← 2^{−1/4} σ end if end for A.2 CMA-ES ALGORITHM Algorithm 2 CMA-ES algorithm. The superscript T denotes transposition. Require: Function f : R^d → R to minimize, parameters b, c, w_1 > ... > w_µ > 0, p_c and others as in e.g. (Hansen & Ostermeier, 2003). m ← 0, C ← Id, σ ← 1. for t = 1...n do Generate x_1, ..., x_λ ∼ m + σN(0, C). Define x′_i as the i-th best of the x_i. Update the cumulation for C: p_c ← cumulation of p_c, the overall direction of progress. Update the covariance matrix: C ← (1 − c) C [inertia] + (c/b) p_c p_c^T [overall direction] + c(1 − 1/b) Σ_{i=1}^{µ} w_i ((x′_i − m)/σ) ((x′_i − m)/σ)^T [“covariance” of the (1/σ) x′_i]. Update the mean: m ← Σ_{i=1}^{µ} w_i x′_i. Update σ by cumulative step-size adaptation (Arnold & Beyer, 2004). end for B ADDITIONAL PLOTS FOR THE TILING TRICK C RESULTS WITH “CARLINI&WAGNER” LOSS In this section, we follow the same experimental setup as in Section 4.3, but we build our attacks with the “Carlini&Wagner” loss instead of the cross entropy. We remark that the results are comparable. D UNTARGETED ATTACKS WITH SMALLER NOISE INTENSITIES We evaluated our method with smaller noise intensities (ε ∈ {0.01, 0.03, 0.05}) in the untargeted setting on the ImageNet dataset. In this framework, we also randomly picked 10,000 images and limited our budget to 10,000 queries. We compared to the bandits method (Ilyas et al., 2018b) and to the parsimonious attack (Moon et al., 2019) on the InceptionV3 network. We limited our experiments to 50 tiles. We report our results in Table 4. We remark that our attacks reach the state of the art for ε = 0.03 and ε = 0.05, both in terms of success rate and query budget. For ε = 0.01, we reach results comparable to the state of the art. E UNTARGETED ATTACKS AGAINST OTHER ARCHITECTURES We also evaluated our method on different neural network architectures. For each network we randomly selected 10,000 images that were correctly classified. We limit our budget to 10,000 queries and set the number of tiles to 50.
We achieve an attack success rate of up to 100% on every classifier, with a budget as low as 8 median queries for VGG16bn for instance (see Table 5). One should notice that the performance is lower on InceptionV3, as is also reported for the bandits method in (Ilyas et al., 2018b). This is possibly due to the fact that the tiling trick is less relevant on the Inception network than on the other networks (see Fig. 2). F TABLE FOR ATTACKS AGAINST AN ADVERSARIALLY TRAINED NETWORK G FAILING METHODS In this section, we compare our attacks to other optimization strategies. We run our experiments in the same setup as in Section 4.3. Results are reported in Table 7. DE and the Normal (1+1)-ES perform poorly, probably because these optimization strategies converge more slowly when the optima are at “infinity”. Finally, as the initialization of Powell is linear in the dimension and has less variance, it performs worse than simple random search. The Newuoa, SQP and Cobyla algorithms have also been tried on a smaller number of images (we did not report the results), but their initialization is also linear in the dimension, so they reach very poor results too.
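To tie the preceding pieces together, here is a schematic, optimizer-agnostic attack loop; it assumes a generic ask/tell interface for the derivative-free optimizer and an image in [0, 1] of shape HxWxC, and is only an illustration of the overall procedure, not the paper's code.

import numpy as np

def black_box_attack(x, y, logits_fn, optimizer, n_tiles, eps, budget):
    # Untargeted l_inf attack: every evaluation of the logits counts as one query.
    h, w, c = x.shape
    rep = np.ones((int(np.ceil(h / n_tiles)), int(np.ceil(w / n_tiles)), c))

    def upsample(tile_values):
        # Map the low-dimensional search vector (one value per tile) to a full-resolution perturbation.
        return np.kron(tile_values.reshape(n_tiles, n_tiles, 1), rep)[:h, :w, :]

    for queries in range(1, budget + 1):
        tau = optimizer.ask()                                        # candidate in R^(n_tiles^2)
        x_adv = np.clip(x + eps * np.tanh(upsample(tau)), 0.0, 1.0)  # continuous formulation (Eq. 2)
        logits = logits_fn(x_adv)
        if int(np.argmax(logits)) != y:                              # success: the label has changed
            return x_adv, queries
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        margin = probs[y] - np.delete(probs, y).max()                # positive while still correctly classified
        optimizer.tell(tau, margin)                                  # the optimizer minimises this margin
    return None, budget

Initializing the optimizer's first candidate with a random tiled sign pattern, as discussed in Section 3.4, typically shortens this loop considerably.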
1. What are the strengths and weaknesses of the proposed DFO framework for generating black-box adversarial examples? 2. How does the reviewer assess the comparison between the proposed approach and other baseline methods, specifically those using zeroth-order optimization? 3. What additional evaluations or comparisons does the reviewer suggest to provide a clearer picture of the attack's performance and efficiency? 4. How does the reviewer view the tradeoff between query efficiency and perturbation power, and what further demonstrations would help clarify this aspect?
Review
Review This paper proposed a DFO framework to generate black-box adversarial examples. By comparing with Parsimonious and Bandits, the proposed approach achieves lower query complexity and higher attack success rate (ASR). I have two main concerns about the current version: 1) Some important baselines might be missing. In addition to (Ilyas et al., 2018b) and (Moon et al., 2019), the methods built on zeroth-order optimization (namely, gradient estimation via function differences) were not compared. Examples include [1] There are No Bit Parts for Sign Bits in Black-Box Attacks [2] AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks [3] SIGNSGD VIA ZEROTH-ORDER ORACLE 2) In addition to attack success rate and query complexity, it might be useful to compare different attacks in terms of $\ell_p$ distortion, where $p \neq \infty$. This could provide a clearer picture on whether or not the query efficiency and the attack performance are at the cost of increasing the $\ell_1$ and $\ell_2$ distortion significantly. ########### Post-feedback ############## Thanks for the response and the additional experiments to address my first question. However, I am not satisfied with the response "But clearly our methods aim to reach the boundary of linf ball, so the distortion might be large" to the second question. I am Okay with the design of $\ell_\infty$ attack. However, if the reduction in query complexity is at a large cost of perturbation power, e.g., measured by $\ell_2$ norm, then it is better to demonstrate this tradeoff. Furthermore, if the $\ell_2$ norm is constrained, will the proposed $\ell_\infty$ attack outperform the others? This is also not clear to me. Thus, I decide to keep my score.
ICLR
Title Yet another but more efficient black-box adversarial attack: tiling and evolution strategies Abstract We introduce a new black-box attack achieving state-of-the-art performance. Our approach is based on a new objective function, borrowing ideas from ℓ∞ white-box attacks, and particularly designed to fit derivative-free optimization requirements. It only requires access to the logits of the classifier, without any other information, which is a more realistic scenario. Not only do we introduce a new objective function, we also extend previous works on black-box adversarial attacks to a larger spectrum of evolution strategies and other derivative-free optimization methods. We also highlight a new intriguing property: deep neural networks are not robust to single-shot tiled attacks. Our models achieve, with a budget limited to 10,000 queries, a success rate of up to 99.2% against the InceptionV3 classifier with 630 queries to the network on average in the untargeted attack setting, which is an improvement of 90 queries over the current state of the art. In the targeted setting, we are able to reach, with a budget limited to 100,000 queries, a 100% success rate with 6,662 queries on average, i.e. we need 800 fewer queries than the current state of the art. 1 INTRODUCTION Despite their success, deep learning algorithms have shown vulnerability to adversarial attacks (Biggio et al., 2013; Szegedy et al., 2014), i.e. small imperceptible perturbations of the inputs that lead the networks to misclassify the generated adversarial examples. Since their discovery, adversarial attacks and defenses have become one of the hottest research topics in the machine learning community, as serious security issues are raised in many critical fields. They also question our understanding of deep learning behaviors. Although some advances have been made to explain adversarial attacks theoretically (Fawzi et al., 2016; Sinha et al., 2017; Cohen et al., 2019; Pinot et al., 2019) and experimentally (Goodfellow et al., 2015; Xie et al., 2018; Meng & Chen, 2017; Samangouei et al., 2018; Araujo et al., 2019), the phenomenon remains misunderstood and we still lack principled guarantees on the robustness of neural networks against maliciously crafted attacks. Designing new and stronger attacks helps build better defenses, hence the motivation of our work. The first attacks were generated in a setting where the attacker knows all the information about the network (architecture and parameters). In this white-box setting, the main idea is to perturb the input in the direction of the gradient of the loss w.r.t. the input (Goodfellow et al., 2015; Kurakin et al., 2016; Carlini & Wagner, 2017; Moosavi-Dezfooli et al., 2016). This case is unrealistic because the attacker has only limited access to the network in practice. For instance, web services that propose commercial recognition systems, such as Amazon or Google, are backed by pretrained neural networks. A user can query such a system by sending an image to classify. For such a query, the user only has access to the inference results of the classifier, which might be the label, probabilities or logits. Such a setting is coined in the literature as the black-box setting. It is more realistic but also more challenging from the attacker’s standpoint. As a consequence, several works proposed black-box attacks by just querying the inference results of a given classifier.
A natural way consists in exploiting the transferability of an adversarial attack, based on the idea that if an example fools a classifier, it is more likely to fool another one (Papernot et al., 2016a). In this case, a white-box attack is crafted on a fully known classifier. Papernot et al. (2017) exploited this property to derive practical black-box attacks. Another approach within the black-box setting consists in estimating the gradient of the loss by querying the classifier (Chen et al., 2017; Ilyas et al., 2018a;b). For these attacks, the PGD attack algorithm (Kurakin et al., 2016; Madry et al., 2018a) is used and the gradient is replaced by its estimate. In this paper, we propose efficient black-box adversarial attacks using stochastic derivative-free optimization (DFO) methods with access only to the logits of the classifier. By efficient, we mean that our model requires a limited number of queries while outperforming the state of the art in terms of attack success rate. At the very core of our approach is a new objective function particularly designed to suit classical derivative-free optimization. We also highlight a new intriguing property: deep neural networks are not robust to single-shot tiled attacks. Our approach leverages results and ideas from ℓ∞ attacks. We also explore a large spectrum of evolution strategies and other derivative-free optimization methods thanks to the Nevergrad framework (Rapin & Teytaud, 2018). Outline of the paper. We present in Section 2 the related work on adversarial attacks. Section 3 presents the core of our approach. We introduce a new generic objective function and discuss two practical instantiations leading to a discrete and a continuous optimization problem. We then give more details on the best-performing derivative-free optimization methods, and provide some insights on our models and optimization strategies. Section 4 is dedicated to a thorough experimental analysis, where we show that we reach state-of-the-art performance by comparing our models with the most powerful black-box approaches on both targeted and untargeted attacks. We also assess our models against the most effective defense strategy so far, based on adversarial training. We finally conclude the paper in Section 5. 2 RELATED WORK Adversarial attacks have a long-standing history in the machine learning community. Early works appeared in the mid 2000s, where the authors were concerned about spam classification (Biggio et al., 2009). Szegedy et al. (2014) revived this research topic by highlighting that deep convolutional networks can be easily fooled. Many adversarial attacks against deep neural networks have been proposed since then. One can distinguish two classes of attacks: white-box and black-box attacks. In the white-box setting, the adversary is supposed to have full knowledge of the network (architecture and parameters), while in the black-box one, the adversary only has limited access to the network: she does not know the architecture, and can only query the network and get labels, logits or probabilities from her queries. An attack is said to have succeeded (we also talk about the attack success rate) if the input was originally well classified and the generated example is no longer assigned the correct label (or is assigned the targeted label, in the targeted case). The white-box setting attracted more attention even though it is the more unrealistic of the two. The attacks are crafted by back-propagating the gradient of the loss function w.r.t. the input.
The problem is stated as a non-convex optimization procedure that either constrains the perturbation or aims at minimizing its norm. Among the most popular attacks, one can cite FGSM (Goodfellow et al., 2015), PGD (Kurakin et al., 2016; Madry et al., 2018a), Deepfool (Moosavi-Dezfooli et al., 2016), JSMA (Papernot et al., 2016b), the Carlini&Wagner attack (Carlini & Wagner, 2017) and EAD (Chen et al., 2018). The black-box setting is more realistic, but also more challenging. Two strategies emerged in the literature to craft attacks within this setting: transferability from a substitute network, and gradient estimation algorithms. Transferability has been pointed out by Papernot et al. (2017). It consists in generating a white-box adversarial example on a fully known substitute neural network, i.e. a network trained on the same classification task. This crafted adversarial example can be transferred to the targeted unknown network. Leveraging this property, Moosavi-Dezfooli et al. (2017) proposed an algorithm to craft a single adversarial attack that is the same for all examples and all networks. Despite the popularity of these methods, gradient estimation algorithms outperform transferability methods. Chen et al. (2017) proposed a variant of the powerful white-box attack introduced in (Carlini & Wagner, 2017), based on gradient estimation with finite differences. This method achieves good results in practice but requires a high number of queries to the network. To reduce the number of queries, Ilyas et al. (2018a) proposed to rely instead on Natural Evolution Strategies (NES). These derivative-free optimization approaches consist in estimating the parametric distribution of the minima of a given objective function. For most NES algorithms, this amounts to performing a natural gradient descent in the space of distributions (Ollivier et al., 2017). In (Al-Dujaili & O’Reilly, 2019), the authors propose to estimate the sign of the gradient instead of its magnitude using zeroth-order optimization techniques. They further show how to reduce the search space from exponential to linear. The achieved results were state of the art at the publication date. In Liu et al. (2019), the authors introduced a zeroth-order version of the signSGD algorithm, studied its convergence properties and showed its efficiency in crafting adversarial black-box attacks. The results are promising but fail to beat the state of the art. In Tu et al. (2019), the authors introduce the AutoZOOM framework, combining gradient estimation and an auto-encoder trained offline with unlabeled data. The idea is appealing but requires training an auto-encoder with an available dataset, which is an additional effort for the attacker. Besides, this may be unrealistic for several use cases. More recently, Moon et al. (2019) proposed a method based on discrete and combinatorial optimization where the perturbations are pushed towards the corners of the ℓ∞ ball. This method is, to the best of our knowledge, the state of the art in the black-box setting in terms of query budget and success rate. We will focus our experiments on this method and show how our approaches achieve better results. Several defense strategies have been proposed to diminish the impact of adversarial attacks on network accuracy. A basic workaround, introduced in (Goodfellow et al., 2015), is to augment the learning set with adversarial examples. Such an approach is called adversarial training in the literature.
1. What is the focus of the paper regarding black box adversarial attacks on deep neural networks? 2. What are the strengths and weaknesses of the experimental design and results? 3. How does the reviewer assess the technical soundness and novelty of the proposed approach? 4. What are some minor comments and questions raised by the reviewer regarding the paper's content?
Review
Review This paper proposes black-box adversarial attacks on deep neural networks. The proposed approaches consist of the tiling technique proposed by Ilyas et al. (2018) and derivative-free approaches. The proposed approaches have been applied to targeted and untargeted adversarial attacks against modern neural network architectures such as VGG16, ResNet50, and InceptionV3 trained on ImageNet and CIFAR10 datasets. Experimental results show a higher attack success rate with a smaller number of queries. The experimental results look quite promising, i.e., revealing the vulnerability of deep neural networks to black-box adversarial attacks. A possible weakness in the experimental design is that the authors haven't applied any defense methodology to the classification models to be attacked. Yet the results are promising. From the viewpoint of technical soundness, the approach is a simple combination of existing approaches. The tiling technique is used in Ilyas et al. (2018) combined with a bandit approach. The current paper simply replaces the bandit with evolution strategies. The introduction of the evolution strategies is motivated by their good performance as zeroth-order optimization algorithms. A small novelty appears in the way of handling a bounded search space. The authors claim that many DFO algorithms are designed for an unbounded real search space and need some constraint handling. The authors proposed two ways of transforming the bounded search space to the unbounded real search space. However, there must be existing approaches for this type of constraint (rectangle constraint) in DFO settings. I cannot list such approaches here as there is a huge number of papers addressing constraints of this type. There is not enough discussion in the paper of why these two proposed approaches are promising. Formulation (2) makes the problem ill-posed and technically the optimal point may not exist. Formulation (3) with the softmax representation makes the optimization problem noisy, hence it may annoy the optimizer. Nonetheless, I believe the combination of these constraint handling techniques and evolutionary approaches is not new. Some minor comments / questions below: P5: How are the original images to be attacked selected for Fig 2? P6: "we highlight that neural neural networks are not robust to l∞ tiled random noise." Isn't it the contribution of (Ilyas et al., 2018b)? P7: What are the numbers of queries in Figure 3 and Table 1? Are they the number of queries spent until these algorithms found an adversarial example which is categorized to a wrong class for the first time?
ICLR
Title Generating Differentially Private Datasets Using GANs Abstract In this paper, we present a technique for generating artificial datasets that retain statistical properties of the real data while providing differential privacy guarantees with respect to this data. We include a Gaussian noise layer in the discriminator of a generative adversarial network to make the output and the gradients differentially private with respect to the training data, and then use the generator component to synthesise a privacy-preserving artificial dataset. Our experiments show that under a reasonably small privacy budget we are able to generate data of high quality and successfully train machine learning models on this artificial data. 1 INTRODUCTION Following recent advancements in deep learning (Silver et al., 2016; He et al., 2015; Wu et al., 2016), more and more people and companies are interested in putting their data to use as they see that machine learning is able to generate a wide range of benefits, including financial, social, medical, security, and so on. At the same time, however, such models are often able to capture a fine level of detail in training data, potentially compromising the privacy of individuals whose features sharply differ from others. This problem is partially mitigated by the use of regularisation techniques that “smooth out” outstanding details and avoid overfitting, but it does not give any theoretical privacy guarantees. Recent research by Fredrikson et al. (2015) suggests that even without access to internal model parameters, by using hill climbing on the output probabilities of a neural network, it is possible to recover (up to a certain degree) individual faces from a training set. The latter result is especially disturbing knowing that deep learning models are becoming an integral part of our lives, making their way to phones, smart watches, cars, and appliances. And since these models are often trained on customers’ data, such training set recovery techniques will endanger privacy even without access to the manufacturer’s servers where these models are being trained. In order to protect privacy while still benefiting from the use of statistics and machine learning, a number of techniques for data anonymisation have been developed over the years, including k-anonymity (Sweeney, 2002), l-diversity (Machanavajjhala et al., 2007), t-closeness (Li et al., 2007), and differential privacy (Dwork, 2006; Dwork et al., 2006; Dwork, 2008; Dwork et al., 2014). The latter has been recognised as a strong standard and is widely accepted by the research community. We study the task of publishing datasets in a differentially private manner. In particular, we are interested in solving two problems. First, we want to be able to benefit from the use of machine learning by third parties while protecting sensitive information of individuals in our dataset. Second, we want to be sure that even if adversaries get access to the third-party model trained on our data, they would not be able to recover private information. An additional challenge is to be able to publish an entire dataset, as opposed to being required to use a query interface like in a typical differentially private framework. In this paper, we propose a simple solution to this problem. The main idea of our approach is to use generative adversarial networks (GANs) introduced in Goodfellow et al.
(2014), trained with the addition of Gaussian noise in the embedding space, to create artificial datasets that follow the same distribution as the real data while providing differential privacy guarantees. This method has a number of advantages over the methods proposed earlier. First of all, this solution is simple to implement, e.g. it does not require training ensembles of models on disjoint data. Second, it can be done on the user's side, and not on the side of the machine learning service provider, which eliminates the necessity of trusting this service provider or implementing privacy-preserving models locally. Third, similarly to Abadi et al. (2016), privacy cannot be compromised even if the entire trained model is accessible to an adversary. Our contributions in this paper are the following: • we propose a novel mechanism for non-interactive differentially private data release, and to the best of our knowledge this is the first practical solution for complex real-world data; • we introduce a new technique for preserving privacy in neural networks via adding noise in the forward pass during training; • we show that this technique guarantees differential privacy for both the outputs and the learned weights of the network; • we demonstrate that we are able to achieve high accuracy in learning tasks while maintaining a reasonable (single-digit) privacy budget. The remainder of the paper is structured as follows. In Section 2, we give an overview of related work. Section 3 contains the necessary background on differential privacy and generative adversarial networks. In Section 4, we describe our approach and provide its theoretical analysis and some practical aspects. Experimental results and implementation details are presented in Section 5, and Section 6 concludes the paper. The theorem proofs and additional details can be found in the Appendix. 2 RELATED WORK Given the level of attention to deep learning and the rising importance of privacy, it is unsurprising that there has been a significant increase in the number of publications on the topic of privacy-preserving deep learning (and machine learning in general) in recent years. One take on the problem is to distribute training and use disjoint sets of training data. An example of such an approach is the paper of Shokri & Shmatikov (2015), where they propose to train in a distributed manner by communicating sanitised updates from participants to a central authority. Such a method, however, yields high privacy losses, as pointed out by Abadi et al. (2016) and Papernot et al. (2016). An alternative technique, also using disjoint training sets, suggested by Papernot et al. (2016), applies an ensemble of independently trained teacher models and semi-supervised knowledge transfer to a student model to achieve almost state-of-the-art (non-private) accuracy on MNIST (LeCun et al., 1998) and SVHN (Netzer et al., 2011) with single-digit differential privacy bounds. This work was based on a paper by Hamm et al. (2016) and extends their method to generic learning models with any type of loss function or optimisation algorithm. To the best of our knowledge, this is the most accurate privacy-preserving learning result to date, although one has to make sure that the whole teaching ensemble and the aggregator are inaccessible to an adversary and that the model is queried for teachers’ votes only a small number of times. A somewhat different approach is taken in Abadi et al. (2016).
They suggest using differentially private stochastic gradient descent (for brevity, we will refer to it as DP-SGD in the remainder of the paper) to train deep learning models in a private manner. This approach allows to achieve high accuracy while maintaining low differential privacy bounds, and does not require distributed training. As stated above, our goal is to enable data usage by third party machine learning service providers to benefit from their expertise. All of the aforementioned methods, however, require every provider of such service to comply with the chosen privacy-preserving procedure which is not realistic. An alternative solution to this problem is to focus on sanitising data and making sure that training machine learning models on it would not compromise privacy. This direction is taken, for example, by Bindschaedler et al. (2017). The authors use a graphical probabilistic model to learn an underlying data distribution and transform real data points (seeds) into synthetic data points. Synthetic data is then filtered by a privacy test based on a plausible deniability criterion, which can be equivalent to differential privacy under certain conditions. Our approach, on the other hand, is to generate private data without requiring any real seeds. Thus, there is no need for privacy tests at the release stage, and the only requirement is that the generative model is privacy-preserving. By using GANs (Goodfellow et al., 2014) we ensure that our method is scalable and applicable to complex real-world data. 3 BACKGROUND This section gives a short introduction to GANs and differential privacy. Another important notion is the moments accountant method (Abadi et al., 2016) used to compute actual privacy bounds during training. However, since it is not essential for understanding the paper, we defer its description to the Appendix. 3.1 GENERATIVE ADVERSARIAL NETWORKS In recent years, generative adversarial networks (Goodfellow et al., 2014; Salimans et al., 2016) and its extensions, such as DCGAN (Radford et al., 2015) and EBGAN (Zhao et al., 2016), have received great attention and pushed the boundaries for deep generative models along with variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014; Gregor et al., 2015) and recursive neural networks (e.g. PixelRNN by Oord et al. (2016)). The most successful application for such generative models so far has been realistic image generation, perhaps due to abundance of training data and inherent geometric structure. In our work, we decided to choose GANs for several reasons. Firstly, GANs have shown very good results in practice, generating sharper images compared to other generative models. Secondly, the forward pass for generating data is much faster than that of, for instance, RNNs. Thirdly, the generator part of the model, the one we eventually interested in, does not interact with real training data at any point in the learning process, only getting gradients from the discriminator. In short, GANs can be described as follows. The model consists of two separate components: the generator G(z) and the discriminator D(x). The generator’s goal is to produce realistic samples of data based on a random variable z ∼ pz(z), while the discriminator is tasked with distinguishing real data samples x ∼ pdata(x) from generated samples x̂ ∼ pg(x). These two models are trained in an adversarial fashion, essentially playing a two-player game, with the goal to converge to the Nash equilibrium. 
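For orientation, the following is a compact (and non-private) sketch of the adversarial training step just described; G, D, the optimisers and the latent dimension are assumed to be defined elsewhere, and the non-saturating loss is a common choice rather than something prescribed by this paper.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, real, opt_G, opt_D, z_dim=100):
    batch = real.size(0)
    ones = torch.ones(batch, 1)
    zeros = torch.zeros(batch, 1)

    # Discriminator update: push D(real) towards 1 and D(G(z)) towards 0.
    z = torch.randn(batch, z_dim)
    fake = G(z)
    d_loss = F.binary_cross_entropy(D(real), ones) + \
             F.binary_cross_entropy(D(fake.detach()), zeros)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator update: G only ever sees gradients coming through D,
    # never the real data itself.
    g_loss = F.binary_cross_entropy(D(fake), ones)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```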
Since training GANs in practice can be challenging, there are a number of commonly used tricks to improve convergence, such as using the Adam optimisation method (Kingma & Ba, 2015), feature matching, batch normalisation, and one-sided label smoothing (Salimans et al., 2016). We also observe improvements from adding labels to the discriminator (Odena, 2016) and unrolling discriminator updates (Metz et al., 2016). 3.2 DIFFERENTIAL PRIVACY The notion of differential privacy has been introduced and extended in a series of papers by Dwork et al. (Dwork, 2006; Dwork et al., 2006; Dwork, 2008; Dwork et al., 2014), and is regarded as a strong privacy standard. It is defined for two adjacent datasets that differ by a single element: Definition 1. A randomized mechanism M : D → R with domain D and range R satisfies (ε, δ)-differential privacy if for any two adjacent inputs d, d′ ∈ D and for any subset of outputs S ⊆ R it holds that: Pr[M(d) ∈ S] ≤ e^ε Pr[M(d′) ∈ S] + δ. (1) Among the mechanisms to achieve differential privacy, two of the most widely used are the Laplacian and Gaussian noise mechanisms. We are primarily interested in the latter, because of the improved privacy bounds analysis provided by the moments accountant method described in the Appendix. The Gaussian noise mechanism is defined as follows: M(d) := f(d) + N(0, s_f^2 · σ^2), (2) where s_f is the sensitivity of f (i.e. s_f = |f(d) − f(d′)| for f : D → R), and N(0, s_f^2 · σ^2) is the Gaussian distribution with mean 0 and standard deviation s_f σ. 4 OUR APPROACH In this section, we describe our solution and provide a theoretical proof of privacy guarantees, as well as discuss limitations of the method. Let us begin with the formal problem statement. Problem Statement. Given the dataset X ∼ pdata(x), generate an artificial dataset X̃ = M(X) using the privacy mechanism M : X → X, such that 1. it follows the same data distribution: X̃ ∼ pdata(x); 2. it provides differential privacy guarantees: Pr[M(X) ∈ S] ≤ e^ε Pr[M(X′) ∈ S] + δ for any adjacent datasets X, X′, and for any S ⊆ X. Here X = {X | X ∼ pdata(x)} is the space of all datasets formed by points drawn from the same distribution pdata(x). In most real-world problems, the true data distribution pdata(x) is unknown and needs to be estimated empirically. Since we are primarily interested in data synthesis, we will turn to generative models, and in particular we are going to use GANs as the mechanism to estimate pdata(x) and draw samples from it. If trained properly, the GAN will provide a solution to sub-problem (1). Despite the fact that the generator does not have access to the real data X in the training process, one cannot guarantee differential privacy because of the information passed through with the gradients from the discriminator. A simple high-level example illustrates such a breach of privacy. Let the datasets X, X′ contain small real numbers. The only difference between these two datasets is the number x′ ∈ X′, which happens to be extremely large. Since the gradients of the model depend on x′, one of the updates of the discriminator trained on X′ may be very different from the rest, and this difference will then be propagated to the generator, breaking privacy in the general case. In order to maintain differential privacy guarantees, we propose the following solution.
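As a brief aside (our own illustration, not part of the original argument), the snippet below numerically probes Definition 1 for the Gaussian mechanism of Eq. (2) on a worst-case pair of scalar queries with sensitivity s_f = 1, using the standard calibration of σ that is stated in the next section; it only tests one-sided threshold sets, so it is a sanity check of the noise calibration, not a proof.

```python
import math
from statistics import NormalDist

eps, delta = 0.5, 1e-5
sigma = math.sqrt(2 * math.log(1.25 / delta)) / eps    # calibration for sensitivity s_f = 1

M_d = NormalDist(0.0, sigma)    # M(d)  = f(d)  + noise, assuming f(d)  = 0
M_d2 = NormalDist(1.0, sigma)   # M(d') = f(d') + noise, assuming f(d') = 1

def tail(dist, t):
    """Pr[M(.) >= t], i.e. the probability of the output set S = [t, inf)."""
    return 1.0 - dist.cdf(t)

# Definition 1 must hold in both directions; probe it on a grid of thresholds.
violation = max(
    tail(a, t) - (math.exp(eps) * tail(b, t) + delta)
    for t in (i / 10 for i in range(-200, 201))
    for a, b in ((M_d, M_d2), (M_d2, M_d))
)
print("largest violation over the probed sets:", violation)   # expected to be <= 0
```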
Proposition. Introduce a Gaussian noise layer in the discriminator network of the GAN, so that its output, and therefore the weights of the trained generator, are differentially private with respect to the input data X. Use this generator to create a publishable differentially private dataset. The components of our solution are depicted in Figure 1. 4.1 THEORETICAL ANALYSIS OF THE APPROACH To validate the proposed solution, we first analyse it theoretically and show that the addition of a Gaussian noise layer in the discriminator network yields differential privacy in the generator. We will take the following steps to do that: 1. analyse the privacy of the output of the noise layer w.r.t. the inputs X and X′; 2. determine privacy bounds on the output of the whole network; 3. show that the same bounds hold for gradient updates. Let us start by describing the setting and notation used in the remainder of the section. We are given two adjacent datasets (X, y) and (X′, y′) and a deterministic feed-forward neural network N with a Gaussian noise layer π. We denote the inputs of the layer π as x_π and x′_π, and the outputs of the final layer of the network ŷ = N(X) and ŷ′ = N(X′), respectively. To ensure (ε, δ)-differential privacy of π, the standard deviation of the noise has to be at least σ = C · √(2 log(1.25/δ)) / ε, where C is the sensitivity of the preceding layer’s output x_π. Lemma 1. If the output of the noise layer π(x_π) is (ε, δ)-differentially private w.r.t. x_π and the network layers before π preserve adjacency of X and X′, then π(X) is also (ε, δ)-differentially private w.r.t. X. The proof of this lemma and of the following Theorems 1 and 2 can be found in the appendix. Using Lemma 1, we are able to demonstrate that the outputs of a feed-forward neural network with a Gaussian noise layer are differentially private with respect to the input data, which is expressed in the following theorem. Theorem 1. (Forward pass) The output ŷ of a deterministic feed-forward neural network N with an (ε, δ)-differentially private layer π is also (ε, δ)-differentially private with respect to X. Now, given that the forward pass is differentially private, we can formulate the main theoretical result of the paper: differential privacy of the gradients, and thus, the weights of the network N. Theorem 2. (Backward pass) Given a feed-forward neural network N with (ε, δ)-differentially private outputs ŷ, the weight updates ω_X^(i) are also (ε, δ)-differentially private with respect to X in each iteration i of gradient descent. Since we are interested in generating data using GANs, we will also need the following corollary to finalise the theoretical foundation for our framework. Corollary 1. (GANs) Given a generative adversarial network consisting of the generator G and the discriminator D with a privacy-preserving layer, gradient updates of G will have the same privacy bounds as gradient updates of D. Proof. This result trivially follows from Theorem 2 once we observe that generator updates are a function of discriminator updates. The above analysis applies to each individual iteration of gradient descent, and privacy bounds on the final parameters can be obtained using composition theorems or the more efficient moments accountant method (Abadi et al., 2016). Note that Theorems 1 and 2 define differential privacy of the neural network with respect to the inputs X only, not taking into account the labels y.
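A minimal PyTorch sketch of such a privacy-preserving layer follows; the module name, the per-example L2 clipping, and using the clipping constant C directly as the sensitivity are our assumptions rather than details given in the text.

```python
import math
import torch
import torch.nn as nn

class GaussianNoiseLayer(nn.Module):
    """Clip each example's activations to norm <= C, then add N(0, sigma^2) noise,
    with sigma = C * sqrt(2 * log(1.25 / delta)) / eps as in the formula above."""

    def __init__(self, clip_norm: float, eps: float, delta: float):
        super().__init__()
        self.clip_norm = clip_norm
        self.sigma = clip_norm * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        flat = x.flatten(start_dim=1)
        norms = flat.norm(dim=1, keepdim=True).clamp(min=1e-12)
        scale = (self.clip_norm / norms).clamp(max=1.0)   # leaves small activations untouched
        clipped = (flat * scale).view_as(x)
        return clipped + torch.randn_like(clipped) * self.sigma
```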
In certain cases, when labels of interest are already a public knowledge and do not reveal any information about data, it may be sufficient. However, if labels privacy is required, it is possible to incorporate it in the proposed approach in two ways. A first solution is to modify the learning problem so that labels become a part of data. For example, if one wants to train a face recognition model with privacy-breaking labels (e.g. specific names— John, Bob, Julia, etc.), it is possible to add these labels to X , and instead use True and False labels in y, indicating whether the input image and the input name correspond to each other. This way, label privacy will be handled by the same framework. Alternatively, one can use a separate privacy-preserving mechanism to retrieve labels during training. In this case, the eventual privacy w.r.t. the pair (X, y) may be derived from a composition of two mechanisms, which is shown in the theorem below. One possible candidate for such mechanism is the noisy voting scheme as used in Papernot et al. (2016). Theorem 3. (Private labels) Given a feed-forward neural network N with (ε1, δ1)–differentially private outputs ŷ, and the training labels ỹ satisfying (ε2, δ2)–differential privacy w.r.t. the true labels y, the gradient updates ω(i)X are (ε1+ε2, δ1+δ2)–differentially private with respect to (X, y) on each iteration i of gradient descent. Proof. There are two privacy mechanismsM1 andM2 applied to X and y correspondingly. Observe thatM1 does not have access to y, and thus, y cannot influence the output probabilities ofM1. The same is true forM2 and X . Consequently, we can assume that both mechanisms are applied to a pair (X, y). This allows us to employ a basic sequential composition theorem for differential privacy (Dwork & Lei, 2009) to obtain the privacy bounds. While it may appeal to use parallel composition instead of sequential composition to obtain a tighter bound, since X and y appear to be disjoint, it would be incorrect. The reason is that X and y are strongly correlated and breaking privacy of one can reveal the other. Alternatively, one could use advanced composition theorems (see e.g. Dwork et al. (2010); Kairouz et al. (2017)) to prove tighter privacy bounds, but it is not the goal of our paper. 4.2 PRACTICAL ASPECTS Based on the analysis above, we can do a number of important observations regarding applicability of this technique. First of all, the analysis is performed for feed-forward networks. Other architectures, such as RNNs, LSTMs, or memory networks, require additional investigation. Second, we focused on deterministic networks, meaning that the only two sources of stochasticity are data shuffling and privacypreserving noise layer π. Additional randomness in the network would complicate the proofs by introducing uncertainty in mappings. Third, conditions of Lemma 1 dictate that the network layers prior to π must preserve adjacency of the input. One layer breaking this condition is batch normalisation, because it introduces interdependencies between examples inside a batch, and just one different instance can change an entire batch. Summarising these limitations, the neural network under question must • be a feed-forward network; • not have randomised layers, e.g. dropout; • not have adjacency breaking layers before the privacy layer, e.g. batch normalisation. In the following section, we will touch upon some implications of it that affect practical performance. 
Note that these restrictions only apply to the network in which we insert a privacy-preserving layer, i.e. only the discriminator in our case. 5 EVALUATION In this section, we provide some implementation details and discuss evaluation results obtained on the MNIST (LeCun et al., 1998) and SVHN (Netzer et al., 2011) datasets. 5.1 EXPERIMENTAL SETUP We evaluate our solution as follows. First, we train a generative model on the original datasets (using only the training part of each) with differential privacy by adding a Gaussian noise layer to the discriminator. We will call this model a teacher, analogously to Papernot et al. (2016). Then, we generate an artificial dataset of comparable size using the obtained model. Finally, we train a separate (non-private) classifier, which we call a student, on the generated data and test it using the held-out test sets. The last step is important from two perspectives: we can quantify the quality of generated samples, as opposed to the visual inspection typically done with GANs, and we can compare test errors to previously reported values. Note that there are no dependencies between the teacher and the student models. Moreover, student models are not constrained to neural networks and can be implemented as any type of machine learning algorithm. We choose two commonly used image classification datasets for our experiments: MNIST and SVHN. MNIST is a handwritten digit recognition dataset consisting of 60’000 training examples and 10’000 test examples, each example being a 28x28 greyscale image. SVHN is also a digit recognition task, with 73’257 images for training and 26’032 for testing. The examples are coloured 32x32 pixel images of house numbers from Google Street View. 5.2 IMPLEMENTATION DETAILS Implementation was done in Python using PyTorch (http://pytorch.org). For the generative model, we used a modified version of DCGAN by Radford et al. (2015). More specifically, the discriminator consists of five (four for MNIST) convolutional layers followed by leaky ReLU activations and a linear classifier with sigmoid output. We clip the output of the third convolutional layer (to ensure bounded sensitivity) and add Gaussian noise before passing it to the remaining convolutions with batch normalisation. The generator has two linear layers in front of five deconvolutions with batch normalisation and ReLU activations, followed by fractional max pooling with tanh activation at the end. Both networks were trained using the Adam optimiser (Kingma & Ba, 2015) with parameters typical for GAN training: learning rate set to 0.0002, β1 = 0.5, β2 = 0.999, and a batch size of 32. Privacy bounds were evaluated using the moments accountant and the privacy amplification theorem (Abadi et al., 2016), and therefore, are data-dependent and are tighter than using normal composition theorems. The student network consists of two convolutional layers with ReLU activations, batch normalisation and max pooling, followed by two fully connected layers with ReLU, and a softmax output layer. Again, training is performed with the Adam algorithm. It is worth mentioning that this network does not achieve state-of-the-art performance on the used datasets, but we are primarily interested in evaluating the performance drop compared to a non-private model rather than getting the best test score.
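As a rough illustration of this wiring (our own simplification, reusing the GaussianNoiseLayer sketched earlier, with depth and channel widths that are not the paper's exact configuration), a DCGAN-style discriminator for 32x32 inputs with the Adam settings quoted above could look as follows:

```python
import torch.nn as nn
import torch.optim as optim

def make_private_discriminator(eps, delta, clip_norm=1.0, channels=3):
    return nn.Sequential(
        # No batch normalisation before the noise layer (see the restrictions in Section 4.2).
        nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        # Clip the third convolution's output and inject calibrated Gaussian noise.
        GaussianNoiseLayer(clip_norm=clip_norm, eps=eps, delta=delta),
        nn.Conv2d(256, 512, 4, stride=2, padding=1),
        nn.BatchNorm2d(512), nn.LeakyReLU(0.2),
        nn.Flatten(), nn.Linear(512 * 2 * 2, 1), nn.Sigmoid(),
    )

D = make_private_discriminator(eps=8.0, delta=1e-6)   # SVHN-sized 32x32 inputs
opt_D = optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
```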
5.3 DISCUSSION Using the experimental setup and implementation described above, we were able to get results close to Papernot et al. (2016), although not quite matching their accuracy for the same privacy bounds on SVHN. A performance gap is expected due to the more generic nature of our method and a simpler privacy-preserving procedure. Overall, we managed to achieve 98.19% accuracy on MNIST and 83.49% accuracy on SVHN while maintaining approximately (3.45, 10^−5)- and (8, 10^−6)-differential privacy. These numbers, along with the corresponding results of Papernot et al. (2016), can be found in Table 1. It is also worth noting that we did not perform rigorous hyper-parameter tuning due to limited computational resources; even better accuracy could be achieved had we done that. Additionally, we trained a simple logistic regression model on MNIST, and obtained 88.96% accuracy on privately generated data compared to 92.58% on the original data, which confirms that any model can be used as a student. Examples of real and generated privacy-preserving images for MNIST and SVHN data are depicted in Figure 2. It can be seen that generated images don’t have the same contrast and dynamic range as real examples, which is not a problem in non-private GANs. We attribute this to the lack of batch normalisation in the discriminator. In addition to the quantitative analysis of test errors and privacy bounds, we perform a visual inspection of generated examples and their corresponding nearest neighbours in the real data. Figure 3 depicts a set of generated private examples and their nearest real counterparts. We observe that while some generated images are very close to real examples, they don’t match exactly, differing either in shape, colour or surrounding digits. Moreover, a lot of pairs come from entirely different classes. 6 CONCLUSIONS We investigate the problem of non-interactive private data release with differential privacy guarantees. We employ generative adversarial networks to produce artificial privacy-preserving datasets. Contrary to existing privacy protection work in deep learning, this method makes it possible to publish sanitised data and train any non-private models on it. The choice of GANs as a generative model ensures scalability and makes the technique suitable for real-world data with complex structure. Moreover, this method does not require running privacy tests on generated data before releasing it. Additionally, we introduce a novel method for preserving the privacy of training data specific to deep neural networks, based on adding noise in the embedding space during the forward pass. It provides differential privacy guarantees and allows privacy-preserving models to be constructed in a simple and straightforward fashion, without modifying optimisation algorithms. In our experiments, we show that student models trained on artificial data can achieve high utility on the MNIST dataset, while maintaining the performance costs of added privacy and flexibility at acceptable levels on the more complicated SVHN data. Adding privacy directly to the trained model still provides better accuracy, and therefore one of the possible directions for future work is to improve the quality of generated data for given privacy bounds. Extending the presented technique and analysis to other types of deep neural networks provides another exciting opportunity for further research. 7 APPENDIX In this appendix, we state again and prove the lemmas and theorems from Section 4.1. 7.1 PROOF OF LEMMA 1 Lemma 1 (restated). If the output of the noise layer π(x_π) is (ε, δ)-differentially private w.r.t. x_π and the network layers before π preserve adjacency of X and X′, then π(X) is also (ε, δ)-differentially private w.r.t. X. Proof.
By definition of differential privacy:

P[π(x_π) ∈ S] ≤ e^ε P[π(x′_π) ∈ S] + δ, (3)

for all adjacent x_π and x′_π. We need to show that the same holds for all adjacent inputs X, X′, i.e. P[π(X) ∈ S] ≤ e^ε P[π(X′) ∈ S] + δ. Observe that we defined our network as deterministic (i.e. not having any randomness apart from initial data shuffling). Therefore, P[X_π | X] = δ_{x_π}(X_π), where δ_x(X) is a Dirac delta function. Conceptually, it means that the entire mass of the distribution of X_π is concentrated on the point x_π. Using the above observation,

P[π(X) ∈ S] = ∫_{X_π} P[π(X_π) ∈ S] P[X_π | X] dX_π (4)
= ∫_{X_π} P[π(X_π) ∈ S] δ_{x_π}(X_π) dX_π (5)
= P[π(x_π) ∈ S] (6)
≤ e^ε P[π(x′_π) ∈ S] + δ (7)
= ∫_{X_π} (e^ε P[π(X_π) ∈ S] + δ) δ_{x′_π}(X_π) dX_π (8)
= ∫_{X_π} (e^ε P[π(X_π) ∈ S] + δ) P[X_π | X′] dX_π (9)
= e^ε P[π(X′) ∈ S] + δ. (10)

Remark. Allowing randomised layers in the network would complicate the proof due to marginalisation over all possible outcomes X_π corresponding to the input X. 7.2 PROOF OF THEOREM 1 Theorem 1. (Forward pass) The output ŷ of a deterministic feed-forward neural network N with an (ε, δ)-differentially private layer π is also (ε, δ)-differentially private with respect to X. Proof. Using the lemma above, we can show that the outputs of the layer π are (ε, δ)-differentially private w.r.t. the inputs X, i.e.

P[π(X) ∈ S] ≤ e^ε P[π(X′) ∈ S] + δ. (11)

Since we require all the layers of N (except π) to be deterministic, there is a deterministic mapping from the outputs of π to ŷ. Let us denote this mapping f(π), and the preimage of a set S under this mapping f^{-1}[S] (i.e. f^{-1}[S] = {π : f(π) ∈ S}). Note that we treat X and X′ as points in the space of all datasets X, and thus, π and f are not set-valued functions. Also, to avoid confusion, let us restate that f^{-1}[S] is a preimage of a set S under f, and not a function inverse. Hence, we do not require f to be bijective, or even injective. Using the above,

P[ŷ ∈ S] = P[f(π(X)) ∈ S] (12)
= P[π(X) ∈ f^{-1}[S]] (13)
≤ e^ε P[π(X′) ∈ f^{-1}[S]] + δ (14)
= e^ε P[f(π(X′)) ∈ S] + δ (15)
= e^ε P[ŷ′ ∈ S] + δ, (16)

for any pair of adjacent datasets X and X′ (differing in one training example), thus proving the theorem. 7.3 PROOF OF THEOREM 2 Theorem 2. (Backward pass) Given a feed-forward neural network N with (ε, δ)-differentially private outputs ŷ, the weight updates ω_X^(i) are also (ε, δ)-differentially private with respect to X in each iteration i of gradient descent. Proof. Let us denote by g(y, ŷ) = ∂L(y, ŷ)/∂ω the gradient of the loss function w.r.t. the network parameters. Similarly to Theorem 1, the preimage of a set T under g is denoted by g^{-1}[y, T] = {ŷ : g(y, ŷ) ∈ T}. To better connect it with Theorem 1, let us define S = g^{-1}[y, T]. Since the gradient is a function of network outputs and labels, we have

P[g(y, ŷ) ∈ T] = P[ŷ ∈ g^{-1}[y, T]] = P[ŷ ∈ S]. (17)

Combining the above results,

P[ω_X^(i) ∈ T] = P[g(y, ŷ) ∈ T] (18)
= P[ŷ ∈ S] (19)
≤ e^ε P[ŷ′ ∈ S] + δ (20)
= e^ε P[g(y, ŷ′) ∈ T] + δ (21)
= e^ε P[ω_X′^(i) ∈ T] + δ, (22)

for any pair of adjacent datasets X and X′, demonstrating that the weight updates stay (ε, δ)-differentially private w.r.t. the input. 7.4 MOMENTS ACCOUNTANT The privacy bound produced by the strong composition theorem is often too loose, and therefore we exploit the moments accountant technique developed by Abadi et al. (2016) for analysing their DP-SGD algorithm. To give the main idea of the method, let us start with defining the privacy loss. Definition 2. Let M : D → R be a randomized mechanism and d, d′ a pair of adjacent databases.
Let aux denote an auxiliary input. For an outcome o ∈ R, the privacy loss at o is defined as:

c(o; M, aux, d, d′) := log ( Pr[M(aux, d) = o] / Pr[M(aux, d′) = o] ). (23)

And the privacy loss random variable C(M, aux, d, d′) is defined as c(M(d); M, aux, d, d′). The moments accountant is then defined as follows: Definition 3. Again, let M : D → R be a randomized mechanism, d, d′ a pair of adjacent databases, and aux denote an auxiliary input. The moments accountant is

α_M(λ) := max_{aux, d, d′} α_M(λ; aux, d, d′), (24)

where α_M(λ; aux, d, d′) := log E[exp(λ C(M, aux, d, d′))] is a moment-generating function. In short, the moments accountant method tracks the bounds on the moments of the privacy loss random variable and then uses Markov inequality to obtain the tail bound on this random variable corresponding to the values of ε and δ.
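As a small worked illustration (ours, not the paper's implementation), the snippet below composes the standard per-step log-moment bound for a Gaussian mechanism with unit sensitivity, α(λ) ≤ λ(λ + 1)/(2σ²), over a number of steps and converts it into an ε for a chosen δ. It omits the subsampling amplification used in the paper, so it does not reproduce the bounds reported in Table 1.

```python
import math

def eps_from_moments(sigma, steps, delta, max_lambda=64):
    """Return an (eps, delta) bound after `steps` adaptive applications of a
    Gaussian mechanism with unit sensitivity and noise scale sigma, using the
    log-moment bound alpha(lambda) <= lambda * (lambda + 1) / (2 * sigma^2)."""
    best = float("inf")
    for lam in range(1, max_lambda + 1):
        alpha_total = steps * lam * (lam + 1) / (2.0 * sigma ** 2)      # composability
        best = min(best, (alpha_total + math.log(1.0 / delta)) / lam)   # tail bound
    return best

# Roughly eps ~ 5.3 for these illustrative settings.
print(eps_from_moments(sigma=10.0, steps=100, delta=1e-5))
```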
1. What is the focus and contribution of the paper regarding non-interactive differentially private mechanisms? 2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness? 3. Do you have any concerns about the technical novelty of the paper? 4. How does the reviewer assess the effectiveness of the algorithm in high-dimensional settings? 5. Does the reviewer have any suggestions for improving the algorithm, such as incorporating assumptions about sparsity in the original dataset? 6. Is the reviewer questioning the novelty of Theorem 2, and if so, why?
Review
Review Summary: The paper addresses the problem of designing a non-interactive differentially private mechanism via adversarial networks. Non-interactive mechanisms have been one of the most sought-after approaches in differentially private algorithm design. The reason is that once a differentially private data set is released, it can be used in any way to answer queries / perform learning tasks without worrying about the privacy budget. However, designing effective non-interactive mechanisms is notoriously hard because of strong computational lower bounds. In that respect, the problem addressed in this paper is extremely important, and the approach of using an adversarial network for the task is very natural (yet novel). The main idea in the paper is to set up a usual adversarial framework with the generator and the discriminator, where the discriminator has access to the raw data. The information (in the form of gradients) is passed from the discriminator on to the generator via a differentially private channel (using the Gaussian mechanism). Positive aspects of the paper: One main positive aspect of the paper is that it comes up with a very simple yet effective approach for a non-interactive mechanism for differential privacy. Another positive aspect of the paper is that it is very well-written and is easy to follow. Questions: I have a few questions about the paper. 1. The technical novelty of the paper is not that high. Given the main idea of using a GAN, the algorithms and the experiments are fairly straightforward. I may be missing something. I believe the paper can be strengthened by placing more emphasis on the technical content. 2. I am mildly concerned about the effectiveness of the algorithm in the high-dimensional setting. The norm of i.i.d. Gaussian noise scales roughly as \sqrt{dimensions}, which may be too much to tolerate in most settings. 3. I was wondering if there is a way to incorporate assumptions about sparsity in the original data set, to handle the curse of dimensionality. 4. I am not sure about the novelty of Theorem 2. Isn't it just the post-processing property of differential privacy?
ICLR
Title Generating Differentially Private Datasets Using GANs Abstract In this paper, we present a technique for generating artificial datasets that retain statistical properties of the real data while providing differential privacy guarantees with respect to this data. We include a Gaussian noise layer in the discriminator of a generative adversarial network to make the output and the gradients differentially private with respect to the training data, and then use the generator component to synthesise privacy-preserving artificial dataset. Our experiments show that under a reasonably small privacy budget we are able to generate data of high quality and successfully train machine learning models on this artificial data. 1 INTRODUCTION Following recent advancements in deep learning (Silver et al., 2016; He et al., 2015; Wu et al., 2016), more and more people and companies are interested in putting their data in use as they see that machine learning is able to generate a wide range of benefits, including financial, social, medical, security, and so on. At the same time, however, such models are often able to capture a fine level of detail in training data potentially compromising privacy of individuals who’s features sharply differ from others. This problem is partially mitigated by the use of regularisation techniques that “smooth out” outstanding details and avoid overfitting, but it does not give any theoretical privacy guarantees. Recent research by Fredrikson et al. (2015) suggests that even without access to internal model parameters, by using hill climbing on output probabilities of a neural network, it is possible to recover (up to a certain degree) individual faces from a training set. The latter result is especially disturbing knowing that deep learning models are becoming an integral part of our lives, making its way to phones, smart watches, cars, and appliances. And since these models are often trained on customers data, such training set recovery techniques will endanger privacy even without access to the manufacturer’s servers where these models are being trained. In order to protect privacy while still benefiting from the use of statistics and machine learning, a number of techniques for data anonymisation has been developed over the years, including kanonymity (Sweeney, 2002), l-diversity (Machanavajjhala et al., 2007), t-closeness (Li et al., 2007), and differential privacy (Dwork, 2006; Dwork et al., 2006; Dwork, 2008; Dwork et al., 2014). The latter has been recognised as a strong standard and is widely accepted by the research community. We study the task of publishing datasets in a differentially private manner. In particular, we are interested in solving two problems. First, we want to be able to benefit from the use of machine learning by third parties while protecting sensitive information of individuals in our dataset. Second, we want to be sure that even if adversaries get access to the third-party model trained on our data, they would not be able to recover private information. An additional challenge is to be able to publish an entire dataset, as opposed to being required to use a query interface like in a typical differentially private framework. In this paper, we propose a simple solution to this problem. The main idea of our approach is to use generative adversarial networks (GANs) introduced in Goodfellow et al. 
(2014), trained with addition of Gaussian noise in the embedding space, to create artificial datasets that follow the same distribution as the real data while providing differential privacy guarantees. This method has a number of advantages over the methods proposed earlier. First of all, this solution is simple to implement, e.g. it does not require training ensembles of models on disjoint data. Second, it can be done on a user side, and not on the side of the machine learning service provider, which eliminates the necessity of trusting this service provider or implementing privacy-preserving models locally. Third, similarly to Abadi et al. (2016), privacy cannot be compromised even if the entire trained model is accessible to an adversary. Our contributions in this paper are the following: • we propose a novel mechanism for non-interactive differentially private data release, and to the best of our knowledge this is the first practical solution for complex real-world data; • we introduce a new technique of preserving privacy in neural networks via adding noise in the forward pass during training; • we show that this technique guarantees differential privacy for both the outputs and the learned weights of the network; • we demonstrate that we are able to achieve high accuracy in learning tasks while maintaining a reasonable (single-digit) privacy budget. The remainder of the paper is structured as follows. In Section 2, we give an overview of related work. Section 3 contains necessary background on differential privacy and generative adversarial networks. In Section 4, we describe our approach and provide its theoretical analysis and some practical aspects. Experimental results and implementation details are presented in Section 5, and Section 6 concludes the paper. The theorem proofs and additional details can be found in the Appendix. 2 RELATED WORK Given the level of attention to deep learning and the rising importance of privacy, it is unsurprising that there has been a significant increase in the number of publications on the topic of privacypreserving deep learning (and machine learning in general) in recent years. One take on the problem is to distribute training and use disjoint sets of training data. An example of such approach is the paper of Shokri & Shmatikov (2015), where they propose to train in a distributed manner by communicating sanitised updates from participants to a central authority. Such a method, however, yields high privacy losses as pointed out by Abadi et al. (2016) and Papernot et al. (2016). An alternative technique, also using disjoint training sets, suggested by Papernot et al. (2016), applies an ensemble of independently trained teacher models and semi-supervised knowledge transfer to a student model to achieve almost state-of-the-art (non-private) accuracy on MNIST (LeCun et al., 1998) and SVHN (Netzer et al., 2011) with single-digit differential privacy bounds. This work was based on a paper by Hamm et al. (2016) and extends their method to generic learning models with any type of loss functions or optimisation algorithms. To the best of our knowledge, this is the most accurate privacy-preserving learning result to date, although one has to make sure that all the teaching ensemble and the aggregator are inaccessible to an adversary and the model is queried for teachers’ votes only a small number of times. A somewhat different approach is taken in Abadi et al. (2016). 
They suggest using differentially private stochastic gradient descent (for brevity, we will refer to it as DP-SGD in the remainder of the paper) to train deep learning models in a private manner. This approach allows to achieve high accuracy while maintaining low differential privacy bounds, and does not require distributed training. As stated above, our goal is to enable data usage by third party machine learning service providers to benefit from their expertise. All of the aforementioned methods, however, require every provider of such service to comply with the chosen privacy-preserving procedure which is not realistic. An alternative solution to this problem is to focus on sanitising data and making sure that training machine learning models on it would not compromise privacy. This direction is taken, for example, by Bindschaedler et al. (2017). The authors use a graphical probabilistic model to learn an underlying data distribution and transform real data points (seeds) into synthetic data points. Synthetic data is then filtered by a privacy test based on a plausible deniability criterion, which can be equivalent to differential privacy under certain conditions. Our approach, on the other hand, is to generate private data without requiring any real seeds. Thus, there is no need for privacy tests at the release stage, and the only requirement is that the generative model is privacy-preserving. By using GANs (Goodfellow et al., 2014) we ensure that our method is scalable and applicable to complex real-world data. 3 BACKGROUND This section gives a short introduction to GANs and differential privacy. Another important notion is the moments accountant method (Abadi et al., 2016) used to compute actual privacy bounds during training. However, since it is not essential for understanding the paper, we defer its description to the Appendix. 3.1 GENERATIVE ADVERSARIAL NETWORKS In recent years, generative adversarial networks (Goodfellow et al., 2014; Salimans et al., 2016) and its extensions, such as DCGAN (Radford et al., 2015) and EBGAN (Zhao et al., 2016), have received great attention and pushed the boundaries for deep generative models along with variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014; Gregor et al., 2015) and recursive neural networks (e.g. PixelRNN by Oord et al. (2016)). The most successful application for such generative models so far has been realistic image generation, perhaps due to abundance of training data and inherent geometric structure. In our work, we decided to choose GANs for several reasons. Firstly, GANs have shown very good results in practice, generating sharper images compared to other generative models. Secondly, the forward pass for generating data is much faster than that of, for instance, RNNs. Thirdly, the generator part of the model, the one we eventually interested in, does not interact with real training data at any point in the learning process, only getting gradients from the discriminator. In short, GANs can be described as follows. The model consists of two separate components: the generator G(z) and the discriminator D(x). The generator’s goal is to produce realistic samples of data based on a random variable z ∼ pz(z), while the discriminator is tasked with distinguishing real data samples x ∼ pdata(x) from generated samples x̂ ∼ pg(x). These two models are trained in an adversarial fashion, essentially playing a two-player game, with the goal to converge to the Nash equilibrium. 
Since training GANs in practice can be challenging, there is a number of commonly used tricks to improve convergence, such as using the Adam optimisation method (Kingma & Ba, 2015), feature matching, batch normalisation, and one-sided label smoothing (Salimans et al., 2016). We also observe improvements with adding labels to the discriminator (Odena, 2016) and unrolling discriminator updates (Metz et al., 2016). 3.2 DIFFERENTIAL PRIVACY The notion of differential privacy has been introduced and extended in a series of papers by Dwork et al. (Dwork, 2006; Dwork et al., 2006; Dwork, 2008; Dwork et al., 2014), and is regarded as a strong privacy standard. It is defined for two adjacent datasets that differ by a single element: Definition 1. A randomized mechanismM : D → R with domain D and range R satisfies (ε, δ)differential privacy if for any two adjacent inputs d, d′ ∈ D and for any subset of outputs S ⊆ R it holds that: Pr [M(d) ∈ S] ≤ eε Pr [M(d′) ∈ S] + δ (1) Among the mechanisms to achieve differential privacy, two of the most widely used are Laplacian and Gaussian noise mechanisms. We are primarily interested in the latter, because of the improved privacy bounds analysis provided by the moments accountant method described in the Appendix. The Gaussian noise mechanism is defined as follows: M(d) , f(d) +N (0, s2f · σ2), (2) where sf is the sensitivity of f (i.e. sf = |f(d) − f(d′)| for f : D → R), and N (0, s2f · σ2) is the Gaussian distribution with the mean 0 and the standard deviation sfσ. 4 OUR APPROACH In this section, we describe our solution and provide a theoretical proof of privacy guarantees, as well as discuss limitations of the method. Let us begin with the formal problem statement. Problem Statement. Given the dataset X ∼ pdata(x), generate an artificial dataset X̃ = M(X) using the privacy mechanismM : X→ X, such that 1. it follows the same data distribution: X̃ ∼ pdata(x); 2. it provides differential privacy guarantees: Pr [M(X) ∈ S] ≤ eε Pr [M(X ′) ∈ S] + δ for any adjacent datasets X,X ′, and for any S ⊆ X. Here X = {X | X ∼ pdata(x)} is the space of all datasets formed by points drawn from the same distribution pdata(x). In most real-world problems, the true data distribution pdata(x) is unknown and needs to be estimated empirically. Since we are primarily interested in data synthesis, we will turn to generative models, and in particular we are going to use GANs as the mechanism to estimate pdata(x) and draw samples from it. If trained properly, GAN will provide a solution to the sub-problem (1). Despite the fact that the generator does not have access to the real dataX in the training process, one cannot guarantee differential privacy because of the information passed through with the gradients from the discriminator. A simple high level example will illustrate such breach of privacy. Let the datasets X,X ′ contain small real numbers. The only difference between these two datasets is the number x′ ∈ X ′, which happens to be extremely large. Since the gradients of the model depend on x′, one of the updates of the discriminator trained on X ′ may be very different from the rest, and this difference will the be propagated to the generator breaking privacy in general case. In order to maintain differential privacy guarantees, we propose the following solution. Proposition. 
Introduce a Gaussian noise layer in the discriminator network of GAN, so that its output, and therefore the weights of the trained generator, are differentially private with respect to the input data X . Use this generator to create a publishable differentially private dataset. The components of our solution are depicted in Figure 1. 4.1 THEORETICAL ANALYSIS OF THE APPROACH To validate the proposed solution, we first analyse it theoretically and show that the addition of a Gaussian noise layer in the discriminator network yields differential privacy in the generator. We will take the following steps to do that: 1. analyse privacy of the output of the noise layer w.r.t. the inputs X and X ′; 2. determine privacy bounds on the output of the whole network; 3. show that the same bounds hold for gradient updates. Let us start by describing the setting and notation used in the remainder of the section. We are given two adjacent datasets (X, y) and (X ′, y′) and a deterministic feed-forward neural network N with a Gaussian noise layer π. We denote the inputs of the layer π as xπ and x′π , and the outputs of the final layer of the network ŷ = N (X) and ŷ′ = N (X) correspondingly. To ensure (ε, δ)-differential privacy of π, the standard deviation of the noise has to be at least σ = C √ 2 log(1.25/δ)/ε, where C is the sensitivity of the preceding layer’s output xπ . Lemma 1. If the output of the noise layer π(xπ) is (ε, δ)-differentially private w.r.t. xπ and the network layers before π preserve adjacency of X and X ′, then π(X) is also (ε, δ)-differentially private w.r.t. X . The proof of this lemma and the following Theorems 1 and 2 can be found in the appendix. Using Lemma 1, we are able demonstrate that the outputs of a feed-forward neural network with a Gaussian noise layer are differentially private with respect to the input data, which is expressed in the following theorem. Theorem 1. (Forward pass) The output ŷ of a deterministic feed-forward neural network N with (ε, δ)-differentially private layer π, is also (ε, δ)-differentially private with respect to X . Now, given that the forward pass is differentially private, we can formulate the main theoretical result of the paper: differential privacy of the gradients, and thus, the weights of the network N . Theorem 2. (Backward pass) Given a feed-forward neural network N with (ε, δ)-differentially private outputs ŷ, weight updates ω(i)X are also (ε, δ)-differentially private with respect to X in each iteration i of gradient descent. Since we are interested in generating data using GANs, we will also need the following corollary to finalise the theoretical foundation for our framework. Corollary 1. (GANs) Given a generative adversarial network consisting of the generator G and the discriminator D with a privacy-preserving layer, gradient updates of G will have the same privacy bounds as gradient updates of D. Proof. This result trivially follows from Theorem 2 once we observe that generator updates are a function of discriminator updates. The above analysis is applicable for each individual iteration of the gradient descent, and privacy bounds on the final parameters can be obtained using composition theorems or a more efficient moments accountant method (Abadi et al., 2016). Note that Theorems 1 and 2 define differential privacy of the neural network with respect to the inputs X only, not taking into account the labels y. 
In certain cases, when labels of interest are already a public knowledge and do not reveal any information about data, it may be sufficient. However, if labels privacy is required, it is possible to incorporate it in the proposed approach in two ways. A first solution is to modify the learning problem so that labels become a part of data. For example, if one wants to train a face recognition model with privacy-breaking labels (e.g. specific names— John, Bob, Julia, etc.), it is possible to add these labels to X , and instead use True and False labels in y, indicating whether the input image and the input name correspond to each other. This way, label privacy will be handled by the same framework. Alternatively, one can use a separate privacy-preserving mechanism to retrieve labels during training. In this case, the eventual privacy w.r.t. the pair (X, y) may be derived from a composition of two mechanisms, which is shown in the theorem below. One possible candidate for such mechanism is the noisy voting scheme as used in Papernot et al. (2016). Theorem 3. (Private labels) Given a feed-forward neural network N with (ε1, δ1)–differentially private outputs ŷ, and the training labels ỹ satisfying (ε2, δ2)–differential privacy w.r.t. the true labels y, the gradient updates ω(i)X are (ε1+ε2, δ1+δ2)–differentially private with respect to (X, y) on each iteration i of gradient descent. Proof. There are two privacy mechanismsM1 andM2 applied to X and y correspondingly. Observe thatM1 does not have access to y, and thus, y cannot influence the output probabilities ofM1. The same is true forM2 and X . Consequently, we can assume that both mechanisms are applied to a pair (X, y). This allows us to employ a basic sequential composition theorem for differential privacy (Dwork & Lei, 2009) to obtain the privacy bounds. While it may appeal to use parallel composition instead of sequential composition to obtain a tighter bound, since X and y appear to be disjoint, it would be incorrect. The reason is that X and y are strongly correlated and breaking privacy of one can reveal the other. Alternatively, one could use advanced composition theorems (see e.g. Dwork et al. (2010); Kairouz et al. (2017)) to prove tighter privacy bounds, but it is not the goal of our paper. 4.2 PRACTICAL ASPECTS Based on the analysis above, we can do a number of important observations regarding applicability of this technique. First of all, the analysis is performed for feed-forward networks. Other architectures, such as RNNs, LSTMs, or memory networks, require additional investigation. Second, we focused on deterministic networks, meaning that the only two sources of stochasticity are data shuffling and privacypreserving noise layer π. Additional randomness in the network would complicate the proofs by introducing uncertainty in mappings. Third, conditions of Lemma 1 dictate that the network layers prior to π must preserve adjacency of the input. One layer breaking this condition is batch normalisation, because it introduces interdependencies between examples inside a batch, and just one different instance can change an entire batch. Summarising these limitations, the neural network under question must • be a feed-forward network; • not have randomised layers, e.g. dropout; • not have adjacency breaking layers before the privacy layer, e.g. batch normalisation. In the following section, we will touch upon some implications of it that affect practical performance. 
Note that these restrictions only apply to the network, in which we insert a privacypreserving layer, i.e. only the discriminator in our case. 5 EVALUATION In this section, we provide some implementation details and discuss evaluation results obtained on MNIST (LeCun et al., 1998) and SVHN (Netzer et al., 2011) datasets. 5.1 EXPERIMENTAL SETUP We evaluate our solution as follows. First, we train a generative model on original datasets (using only training parts of each) with differential privacy by adding a Gaussian noise layer to the discriminator. We will call this model a teacher, analogously to Papernot et al. (2016). Then, we generate an artificial dataset of comparable size using the obtained model. Finally, we train a separate (nonprivate) classifier, which we call a student, on generated data and test it using held-out test sets. The last step is important from two perspectives: we can quantify the quality of generated samples as opposed to visual inspection typically done with GANs, and we can compare test errors to previously reported values. Note that there is no dependencies between the teacher and the student models. Moreover, student models are not constrained to neural networks and can be implemented as any type of machine learning algorithm. We choose two commonly used image classification datasets for our experiments: MNIST and SVHN. MNIST is a handwritten digit recognition dataset consisting of 60’000 training examples and 10’000 test examples, each example is a 28x28 size greyscale image. SVHN is also a digit recognition task, with 73’257 images for training and 26’032 for testing. The examples are coloured 32x32 pixel images of house numbers from Google Street View. 5.2 IMPLEMENTATION DETAILS Implementation was done in Python using Pytorch1. For generative model, we used a modified version of DCGAN by Radford et al. (2015). More specifically, the discriminator consists of five (four for MNIST) convolutional layers followed by leaky ReLU activations and a linear classifier with sigmoid output. We clip the output of the third convolutional layer (to ensure bounded sensitivity) and add Gaussian noise before passing it to the remaining convolutions with batch normalisation. The generator has two linear layers in front of five deconvolutions with batch normalisation and ReLU activations, ensued by fractional max pooling with tanh activation at the end. Both networks were trained using Adam optimiser (Kingma & Ba, 2015) with parameters typical for GAN training: learning rate set to 0.0002, β1 = 0.5, β2 = 0.999, and a batch size of 32. Privacy bounds were evaluated using the moments accountant and the privacy amplification theorem (Abadi et al., 2016), and therefore, are data-dependent and are tighter than using normal composition theorems. The student network is constructed of two convolutional layers with ReLU activations, batch normalisation and max pooling, followed by two fully connected layers with ReLU, and a softmax output layer. Again, training is performed by Adam algorithm. It is worth mentioning that this network does not achieve state-of-the-art performance on the used datasets, but we are primarily interested in evaluating the performance drop compared to a non-private model rather than getting the best test score. 5.3 DISCUSSION Using the experimental setup and implementation described above, we were able to get results close to Papernot et al. (2016) although not quite matching their accuracy for the same privacy bounds on SVHN. 
A performance gap is expected due to more generic nature of our method and a simpler privacy-preserving procedure. Overall, we managed to achieve 98.19% accuracy on MNIST and 83.49% accuracy on SVHN while maintaining approximately (3.45, 10−5) and (8, 10−6)- differential privacy. These numbers, along with the corresponding results of Papernot et al. (2016), 1http://pytorch.org can be found in Table 1. It is also worth noting that we did not perform rigorous hyper-parameter tuning due to limited computational resources; even better accuracy could be achieved have we had done that. Additionally, we trained a simple logistic regression model on MNIST, and obtained 88.96% accuracy on privately generated data compared to 92.58% on the original data, which confirms that any model can be used as a student. Examples of real and generated privacy-preserving images for MNIST and SVHN data are depicted on Figure 2. It can be seen that generated images don’t have the same contrast and dynamic range as real examples, which is not a problem in non-private GANs. We attribute it to the lack of batch normalisation in the discriminator. In addition to quantitative analysis of test errors and privacy bounds, we perform visual inspection of generated examples and corresponding nearest neighbours in real data. Figure 3 depicts a set of generated private examples and their nearest real counterparts. We observe that while some generated images are very close to real examples they don’t match exactly, differing either in shape, colour or surrounding digits. Moreover, a lot of pairs come from entirely different classes. 6 CONCLUSIONS We investigate the problem of non-interactive private data release with differential privacy guarantees. We employ generative adversarial networks to produce artificial privacy-preserving datasets. Contrary to existing privacy protection work in deep learning, this method allows to publish sanitised data and train any non-private models on it. The choice of GANs as a generative model ensures scalability and makes the technique suitable for real-world data with complex structure. Moreover, this method does not require running privacy tests on generated data before releasing it. Additionally, we introduce a novel method for preserving privacy of training data specific to deep neural networks based on adding noise in the embedding space during forward pass. It provides differential privacy guarantees and allows to construct privacy-preserving models in a simple and straightforward fashion, without modifying optimisation algorithms. In our experiments, we show that student models trained on artificial data can achieve high utility on MNIST dataset, while maintaining performance costs of added privacy and flexibility at acceptable levels on a more complicated SVHN data. Adding privacy directly to the trained model still provides better accuracy, and therefore, one of the possible directions for future work is to improve the quality of generated data for given privacy bounds. Extending presented technique and analysis to other types of deep neural networks provides another exciting opportunity for further research. 7 APPENDIX In this appendix, we state again and prove lemmas and theorems from Section 4.1. 7.1 PROOF OF LEMMA 1 Lemma 2. If the output of the noise layer π(xπ) is (ε, δ)-differentially private w.r.t. xπ and the network layers before π preserve adjacency of X and X ′, then π(X) is also (ε, δ)-differentially private w.r.t. X . Proof. 
By definition of differential privacy: P [π(xπ) ∈ S] ≤ eεP [π(x′π) ∈ S] + δ, (3) for all adjacent xπ and x′π . We need to show that the same holds for all adjacent inputs X,X ′, i.e. P [π(X) ∈ S] ≤ eεP [π(X ′) ∈ S] + δ. Observe that we defined our network as deterministic (i.e. not having any randomness apart from initial data shuffling). Therefore, P [Xπ|X] = δxπ (Xπ), where δx(X) is a Dirac delta function. Conceptually, it means that the entire mass of the distribution of Xπ is concentrated on the point xπ . Using the above observation, P [π(X) ∈ S] = ∫ Xπ P [π(Xπ) ∈ S]P [Xπ|X] dXπ (4) = ∫ Xπ P [π(Xπ) ∈ S]δxπ (Xπ) dXπ (5) = P [π(xπ) ∈ S] (6) ≤ eεP [π(x′π) ∈ S] + δ (7) = ∫ Xπ (eεP [π(Xπ) ∈ S] + δ) δx′π (Xπ) dXπ (8) = ∫ Xπ (eεP [π(Xπ) ∈ S] + δ)P [Xπ|X ′] dXπ (9) =≤ eεP [π(X ′) ∈ S] + δ (10) Remark. Allowing randomised layers in the network would complicate the proof due to marginalisation over all possible outcomes Xπ corresponding to the input X . 7.2 PROOF OF THEOREM 1 Theorem 1. (Forward pass) The output ŷ of a deterministic feed-forward neural network N with (ε, δ)-differentially private layer π, is also (ε, δ)-differentially private with respect to X . Proof. Using the lemma above, we can show that outputs of the layer π are (ε, δ)-differentially private w.r.t. the inputs X , i.e. P [π(X) ∈ S] ≤ eεP [π(X ′) ∈ S] + δ (11) Since we require all the layers of N (except π) to be deterministic, there is a deterministic mapping from the outputs of π to ŷ. Let us denote this mapping f(π), and the preimage of a set S under this mapping f−1[S] (i.e. f−1[S] = {π : f(π) ∈ S}). Note that we treat X and X ′ as points in the space of all datasets X , and thus, π and f are not set-valued functions. Also, to avoid confusion, let us restate that f−1[S] is a preimage of a set S under f , and not a function inverse. Hence, we do not require f to be bijective, or even injective. Using the above, P [ŷ ∈ S] = P [f(π(X)) ∈ S] (12) = P [π(X) ∈ f−1[S]] (13) ≤ eεP [π(X ′) ∈ f−1[S]] + δ (14) = eεP [f(π(X ′)) ∈ S] + δ (15) = eεP [ŷ′ ∈ S] + δ, (16) for any pair of adjacent datasets X and X ′ (differing in one training example), thus, proving the theorem. 7.3 PROOF OF THEOREM 2 Theorem 2. (Backward pass) Given a feed-forward neural network N with (ε, δ)-differentially private outputs ŷ, weight updates ω(i)X are also (ε, δ)-differentially private with respect to X in each iteration i of gradient descent. Proof. Let us denote by g(y, ŷ) = ∂L(y,ŷ)∂ω the gradient of the loss function w.r.t. network parameters. Similarly to Theorem 1, the preimage of a set T under g is denoted by g−1[y,T] = {ŷ : g(y, ŷ) ∈ T}. To better connect it with Theorem 1 let us define S = g−1[y,T]. Since gradient is a function of network outputs and labels, we have P [g(y, ŷ) ∈ T] = P [ŷ ∈ g−1[y,T]] = P [ŷ ∈ S]. (17) Combining the above results, P [ω (i) X ∈ T] = P [g(y, ŷ) ∈ T] (18) = P [ŷ ∈ S] (19) ≤ eεP [ŷ′ ∈ S] + δ (20) = eεP [g(y, ŷ′) ∈ T] + δ (21) = eεP [ω (i) X′ ∈ T] + δ, (22) for any pair of adjacent datasets X and X ′, demonstrating that weight updates stay (ε, δ)differentially private w.r.t to the input. 7.4 MOMENTS ACCOUNTANT The privacy bound produced by the strong composition theorem is often too loose, and therefore, we exploit the moments accountant technique developed by Abadi et al. (2016) for analysing their DP-SGD algorithm. To give the main idea of the method, let us start with defining the privacy loss. Definition 2. LetM : D → R be a randomized mechanism and d, d′ a pair of adjacent databases. 
Let aux denote an auxiliary input. For an outcome o ∈ R, the privacy loss at o is defined as: c(o; M, aux, d, d′) := log ( Pr[M(aux, d) = o] / Pr[M(aux, d′) = o] ). (23) The privacy loss random variable C(M, aux, d, d′) is then defined as c(M(d); M, aux, d, d′). The moments accountant is defined as follows: Definition 3. Again, let M : D → R be a randomized mechanism, d, d′ a pair of adjacent databases, and aux an auxiliary input. The moments accountant is αM(λ) := max_{aux, d, d′} αM(λ; aux, d, d′), (24) where αM(λ; aux, d, d′) := log E[exp(λ C(M, aux, d, d′))] is the logarithm of the moment-generating function of the privacy loss random variable. In short, the moments accountant method tracks bounds on the moments of the privacy loss random variable and then uses Markov's inequality to obtain the tail bound on this random variable corresponding to the values of ε and δ.
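As a concrete illustration of Definitions 2 and 3, the short sketch below estimates α(λ) by Monte Carlo for a one-dimensional Gaussian mechanism with unit sensitivity and compares it with the closed-form value λ(λ + 1)/(2σ²) known for this mechanism. The chosen σ, λ, and sample size are purely illustrative and are not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, lam, n = 4.0, 4, 1_000_000

# Adjacent databases mapped to f(d) = 0 and f(d') = 1 (sensitivity 1); the mechanism adds N(0, sigma^2).
outputs = rng.normal(0.0, sigma, size=n)                    # samples o ~ M(d)
priv_loss = (1.0 - 2.0 * outputs) / (2.0 * sigma ** 2)      # c(o) = log N(o; 0, s^2) - log N(o; 1, s^2)

empirical_alpha = np.log(np.mean(np.exp(lam * priv_loss)))  # log E[exp(lambda * C)]
closed_form_alpha = lam * (lam + 1) / (2.0 * sigma ** 2)    # known value for the Gaussian mechanism
print(empirical_alpha, closed_form_alpha)                   # the two numbers should roughly agree
```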
1. What is the main contribution of the paper regarding GANs and synthetic data generation? 2. What are the concerns regarding the privacy aspect of the proposed technique? 3. How does the reviewer assess the clarity and presentation of the paper's content, particularly regarding privacy analysis? 4. Are there any suggestions for improving the privacy analysis and presentation of the paper?
Review
Review The paper proposes a technique for differentially privately generating synthetic data using GAN, and experimentally showed that their method achieves both high utility and good privacy. The idea of building a differentially private GAN and generating differentially private synthetic data is very interesting. However, my main concern is the privacy aspect of the technique, as it is not explained clearly enough in the paper. There is also room for improvement in the presentation and clarity of the paper. More details: - About the differential privacy aspect: The author didn't provide detailed privacy analysis of the Gaussian noise layer, and I don't find the values of the sensitivity (C = 1) provided in the answer to a public comment easy to see. Also, the paper mentioned that the batch size is 32 and the author mentioned in the comment that the std of the Gaussian noise is 0.7, and the number of epoch is 50 or 150. I think these values would lead to epsilon much larger than 8 (as in Table 1). However, in Section 5.2, it is said that "Privacy bounds were evaluated using the moments accountant and the privacy amplification theorem (Abadi et al., 2016), and therefore, are data-dependent and are tighter than using normal composition theorems." I don't see clearly why privacy amplification is needed here, and why using moments accountant and privacy amplification can lead to data-dependent privacy loss. In general, I don't find the privacy analysis of this paper clear and detailed enough to convince me about the correctness of the privacy results. However, I am very happy to change my opinion if there are convincing details in the rebuttal. - About the presentation: As a paper proposing a differentially private algorithm, detailed and formal analysis of the privacy guarantees is essential to convince the readers. For example, I think it would be much better if there is a formal theorem showing the sensitivity of the Gaussian noise layer. And it would be better to restate (in Appendix 7.4) not only the definition of moments accountant, but the composition and tail bound, as well as the moments accountant for the Gaussian mechanism, since they are all used in the privacy analysis of this paper.
ICLR
Title Generating Differentially Private Datasets Using GANs Abstract In this paper, we present a technique for generating artificial datasets that retain the statistical properties of the real data while providing differential privacy guarantees with respect to this data. We include a Gaussian noise layer in the discriminator of a generative adversarial network to make the output and the gradients differentially private with respect to the training data, and then use the generator component to synthesise a privacy-preserving artificial dataset. Our experiments show that under a reasonably small privacy budget we are able to generate data of high quality and successfully train machine learning models on this artificial data. 1 INTRODUCTION Following recent advancements in deep learning (Silver et al., 2016; He et al., 2015; Wu et al., 2016), more and more people and companies are interested in putting their data to use as they see that machine learning is able to generate a wide range of benefits, including financial, social, medical, security, and so on. At the same time, however, such models are often able to capture a fine level of detail in the training data, potentially compromising the privacy of individuals whose features sharply differ from others. This problem is partially mitigated by the use of regularisation techniques that “smooth out” outstanding details and avoid overfitting, but it does not give any theoretical privacy guarantees. Recent research by Fredrikson et al. (2015) suggests that even without access to internal model parameters, by using hill climbing on the output probabilities of a neural network, it is possible to recover (up to a certain degree) individual faces from a training set. The latter result is especially disturbing given that deep learning models are becoming an integral part of our lives, making their way into phones, smart watches, cars, and appliances. And since these models are often trained on customers’ data, such training set recovery techniques will endanger privacy even without access to the manufacturer’s servers where these models are being trained. In order to protect privacy while still benefiting from the use of statistics and machine learning, a number of techniques for data anonymisation have been developed over the years, including k-anonymity (Sweeney, 2002), l-diversity (Machanavajjhala et al., 2007), t-closeness (Li et al., 2007), and differential privacy (Dwork, 2006; Dwork et al., 2006; Dwork, 2008; Dwork et al., 2014). The latter has been recognised as a strong standard and is widely accepted by the research community. We study the task of publishing datasets in a differentially private manner. In particular, we are interested in solving two problems. First, we want to be able to benefit from the use of machine learning by third parties while protecting sensitive information of individuals in our dataset. Second, we want to be sure that even if adversaries get access to the third-party model trained on our data, they would not be able to recover private information. An additional challenge is to be able to publish an entire dataset, as opposed to being required to use a query interface like in a typical differentially private framework. In this paper, we propose a simple solution to this problem. The main idea of our approach is to use generative adversarial networks (GANs) introduced in Goodfellow et al.
(2014), trained with addition of Gaussian noise in the embedding space, to create artificial datasets that follow the same distribution as the real data while providing differential privacy guarantees. This method has a number of advantages over the methods proposed earlier. First of all, this solution is simple to implement, e.g. it does not require training ensembles of models on disjoint data. Second, it can be done on a user side, and not on the side of the machine learning service provider, which eliminates the necessity of trusting this service provider or implementing privacy-preserving models locally. Third, similarly to Abadi et al. (2016), privacy cannot be compromised even if the entire trained model is accessible to an adversary. Our contributions in this paper are the following: • we propose a novel mechanism for non-interactive differentially private data release, and to the best of our knowledge this is the first practical solution for complex real-world data; • we introduce a new technique of preserving privacy in neural networks via adding noise in the forward pass during training; • we show that this technique guarantees differential privacy for both the outputs and the learned weights of the network; • we demonstrate that we are able to achieve high accuracy in learning tasks while maintaining a reasonable (single-digit) privacy budget. The remainder of the paper is structured as follows. In Section 2, we give an overview of related work. Section 3 contains necessary background on differential privacy and generative adversarial networks. In Section 4, we describe our approach and provide its theoretical analysis and some practical aspects. Experimental results and implementation details are presented in Section 5, and Section 6 concludes the paper. The theorem proofs and additional details can be found in the Appendix. 2 RELATED WORK Given the level of attention to deep learning and the rising importance of privacy, it is unsurprising that there has been a significant increase in the number of publications on the topic of privacypreserving deep learning (and machine learning in general) in recent years. One take on the problem is to distribute training and use disjoint sets of training data. An example of such approach is the paper of Shokri & Shmatikov (2015), where they propose to train in a distributed manner by communicating sanitised updates from participants to a central authority. Such a method, however, yields high privacy losses as pointed out by Abadi et al. (2016) and Papernot et al. (2016). An alternative technique, also using disjoint training sets, suggested by Papernot et al. (2016), applies an ensemble of independently trained teacher models and semi-supervised knowledge transfer to a student model to achieve almost state-of-the-art (non-private) accuracy on MNIST (LeCun et al., 1998) and SVHN (Netzer et al., 2011) with single-digit differential privacy bounds. This work was based on a paper by Hamm et al. (2016) and extends their method to generic learning models with any type of loss functions or optimisation algorithms. To the best of our knowledge, this is the most accurate privacy-preserving learning result to date, although one has to make sure that all the teaching ensemble and the aggregator are inaccessible to an adversary and the model is queried for teachers’ votes only a small number of times. A somewhat different approach is taken in Abadi et al. (2016). 
They suggest using differentially private stochastic gradient descent (for brevity, we will refer to it as DP-SGD in the remainder of the paper) to train deep learning models in a private manner. This approach allows to achieve high accuracy while maintaining low differential privacy bounds, and does not require distributed training. As stated above, our goal is to enable data usage by third party machine learning service providers to benefit from their expertise. All of the aforementioned methods, however, require every provider of such service to comply with the chosen privacy-preserving procedure which is not realistic. An alternative solution to this problem is to focus on sanitising data and making sure that training machine learning models on it would not compromise privacy. This direction is taken, for example, by Bindschaedler et al. (2017). The authors use a graphical probabilistic model to learn an underlying data distribution and transform real data points (seeds) into synthetic data points. Synthetic data is then filtered by a privacy test based on a plausible deniability criterion, which can be equivalent to differential privacy under certain conditions. Our approach, on the other hand, is to generate private data without requiring any real seeds. Thus, there is no need for privacy tests at the release stage, and the only requirement is that the generative model is privacy-preserving. By using GANs (Goodfellow et al., 2014) we ensure that our method is scalable and applicable to complex real-world data. 3 BACKGROUND This section gives a short introduction to GANs and differential privacy. Another important notion is the moments accountant method (Abadi et al., 2016) used to compute actual privacy bounds during training. However, since it is not essential for understanding the paper, we defer its description to the Appendix. 3.1 GENERATIVE ADVERSARIAL NETWORKS In recent years, generative adversarial networks (Goodfellow et al., 2014; Salimans et al., 2016) and its extensions, such as DCGAN (Radford et al., 2015) and EBGAN (Zhao et al., 2016), have received great attention and pushed the boundaries for deep generative models along with variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014; Gregor et al., 2015) and recursive neural networks (e.g. PixelRNN by Oord et al. (2016)). The most successful application for such generative models so far has been realistic image generation, perhaps due to abundance of training data and inherent geometric structure. In our work, we decided to choose GANs for several reasons. Firstly, GANs have shown very good results in practice, generating sharper images compared to other generative models. Secondly, the forward pass for generating data is much faster than that of, for instance, RNNs. Thirdly, the generator part of the model, the one we eventually interested in, does not interact with real training data at any point in the learning process, only getting gradients from the discriminator. In short, GANs can be described as follows. The model consists of two separate components: the generator G(z) and the discriminator D(x). The generator’s goal is to produce realistic samples of data based on a random variable z ∼ pz(z), while the discriminator is tasked with distinguishing real data samples x ∼ pdata(x) from generated samples x̂ ∼ pg(x). These two models are trained in an adversarial fashion, essentially playing a two-player game, with the goal to converge to the Nash equilibrium. 
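To make the adversarial game described above concrete, a minimal PyTorch sketch of one generator/discriminator update is given below. The toy fully connected networks, the latent dimension, and the loss formulation are illustrative assumptions only, and not the DCGAN-based architecture used later in this paper.

```python
import torch
import torch.nn as nn

# Toy networks for illustration; the paper uses a DCGAN-style convolutional architecture instead.
latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCELoss()

def gan_step(x_real):
    """One adversarial update: D learns to separate real from fake, G learns to fool D."""
    b = x_real.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator update: push D(x_real) towards 1 and D(G(z)) towards 0.
    x_fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(x_real), ones) + bce(D(x_fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: G only receives gradients through the discriminator's output.
    loss_g = bce(D(G(torch.randn(b, latent_dim))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage with a random batch standing in for real data scaled to [-1, 1].
gan_step(torch.rand(32, data_dim) * 2 - 1)
```

Note how the generator interacts with the data only through the discriminator's gradients, which is exactly the pathway that the Gaussian noise layer introduced later is meant to protect.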
Since training GANs in practice can be challenging, there are a number of commonly used tricks to improve convergence, such as using the Adam optimisation method (Kingma & Ba, 2015), feature matching, batch normalisation, and one-sided label smoothing (Salimans et al., 2016). We also observe improvements from adding labels to the discriminator (Odena, 2016) and unrolling discriminator updates (Metz et al., 2016). 3.2 DIFFERENTIAL PRIVACY The notion of differential privacy has been introduced and extended in a series of papers by Dwork et al. (Dwork, 2006; Dwork et al., 2006; Dwork, 2008; Dwork et al., 2014), and is regarded as a strong privacy standard. It is defined for two adjacent datasets that differ by a single element: Definition 1. A randomized mechanism M : D → R with domain D and range R satisfies (ε, δ)-differential privacy if for any two adjacent inputs d, d′ ∈ D and for any subset of outputs S ⊆ R it holds that: Pr[M(d) ∈ S] ≤ eε Pr[M(d′) ∈ S] + δ (1) Among the mechanisms to achieve differential privacy, two of the most widely used are the Laplacian and Gaussian noise mechanisms. We are primarily interested in the latter, because of the improved privacy bounds analysis provided by the moments accountant method described in the Appendix. The Gaussian noise mechanism is defined as follows: M(d) := f(d) + N(0, s_f^2 · σ^2), (2) where s_f is the sensitivity of f (i.e. s_f = |f(d) − f(d′)| for f : D → R), and N(0, s_f^2 · σ^2) is the Gaussian distribution with mean 0 and standard deviation s_f σ. 4 OUR APPROACH In this section, we describe our solution and provide a theoretical proof of its privacy guarantees, as well as discuss the limitations of the method. Let us begin with the formal problem statement. Problem Statement. Given the dataset X ∼ pdata(x), generate an artificial dataset X̃ = M(X) using the privacy mechanism M : X → X, such that 1. it follows the same data distribution: X̃ ∼ pdata(x); 2. it provides differential privacy guarantees: Pr[M(X) ∈ S] ≤ eε Pr[M(X′) ∈ S] + δ for any adjacent datasets X, X′, and for any S ⊆ X. Here X = {X | X ∼ pdata(x)} is the space of all datasets formed by points drawn from the same distribution pdata(x). In most real-world problems, the true data distribution pdata(x) is unknown and needs to be estimated empirically. Since we are primarily interested in data synthesis, we will turn to generative models, and in particular we are going to use GANs as the mechanism to estimate pdata(x) and draw samples from it. If trained properly, the GAN will provide a solution to sub-problem (1). Despite the fact that the generator does not have access to the real data X in the training process, one cannot guarantee differential privacy because of the information passed through with the gradients from the discriminator. A simple high-level example will illustrate such a breach of privacy. Let the datasets X, X′ contain small real numbers. The only difference between these two datasets is the number x′ ∈ X′, which happens to be extremely large. Since the gradients of the model depend on x′, one of the updates of the discriminator trained on X′ may be very different from the rest, and this difference will then be propagated to the generator, breaking privacy in the general case. In order to maintain differential privacy guarantees, we propose the following solution. Proposition.
Introduce a Gaussian noise layer in the discriminator network of the GAN, so that its output, and therefore the weights of the trained generator, are differentially private with respect to the input data X. Use this generator to create a publishable differentially private dataset. The components of our solution are depicted in Figure 1. 4.1 THEORETICAL ANALYSIS OF THE APPROACH To validate the proposed solution, we first analyse it theoretically and show that the addition of a Gaussian noise layer in the discriminator network yields differential privacy in the generator. We will take the following steps to do that: 1. analyse the privacy of the output of the noise layer w.r.t. the inputs X and X′; 2. determine privacy bounds on the output of the whole network; 3. show that the same bounds hold for gradient updates. Let us start by describing the setting and notation used in the remainder of the section. We are given two adjacent datasets (X, y) and (X′, y′) and a deterministic feed-forward neural network N with a Gaussian noise layer π. We denote the inputs of the layer π as xπ and x′π, and the outputs of the final layer of the network as ŷ = N(X) and ŷ′ = N(X′) correspondingly. To ensure (ε, δ)-differential privacy of π, the standard deviation of the noise has to be at least σ = C·√(2 log(1.25/δ))/ε, where C is the sensitivity of the preceding layer’s output xπ. Lemma 1. If the output of the noise layer π(xπ) is (ε, δ)-differentially private w.r.t. xπ and the network layers before π preserve adjacency of X and X′, then π(X) is also (ε, δ)-differentially private w.r.t. X. The proof of this lemma and of the following Theorems 1 and 2 can be found in the appendix. Using Lemma 1, we are able to demonstrate that the outputs of a feed-forward neural network with a Gaussian noise layer are differentially private with respect to the input data, which is expressed in the following theorem. Theorem 1. (Forward pass) The output ŷ of a deterministic feed-forward neural network N with an (ε, δ)-differentially private layer π is also (ε, δ)-differentially private with respect to X. Now, given that the forward pass is differentially private, we can formulate the main theoretical result of the paper: differential privacy of the gradients, and thus the weights, of the network N. Theorem 2. (Backward pass) Given a feed-forward neural network N with (ε, δ)-differentially private outputs ŷ, the weight updates ω(i)X are also (ε, δ)-differentially private with respect to X in each iteration i of gradient descent. Since we are interested in generating data using GANs, we will also need the following corollary to finalise the theoretical foundation for our framework. Corollary 1. (GANs) Given a generative adversarial network consisting of the generator G and the discriminator D with a privacy-preserving layer, gradient updates of G will have the same privacy bounds as gradient updates of D. Proof. This result trivially follows from Theorem 2 once we observe that generator updates are a function of discriminator updates. The above analysis applies to each individual iteration of gradient descent, and privacy bounds on the final parameters can be obtained using composition theorems or the more efficient moments accountant method (Abadi et al., 2016). Note that Theorems 1 and 2 define differential privacy of the neural network with respect to the inputs X only, not taking into account the labels y.
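Before turning to the treatment of labels, the proposition above can also be read as a small architectural change. A minimal sketch of such a noise layer is given below, assuming the incoming per-example activations are clipped to an L2 norm of at most C, which is one way to enforce the bounded sensitivity used in the analysis; the module name, the clipping scheme, and the default values are our illustrative assumptions rather than the exact layer used in the experiments.

```python
import torch
import torch.nn as nn

class GaussianNoiseLayer(nn.Module):
    """Clips each example's activations to L2 norm at most clip_norm (= C), then adds
    N(0, (C * sigma)^2) noise; sigma >= sqrt(2 * log(1.25 / delta)) / eps calibrates the layer
    output to be (eps, delta)-differentially private w.r.t. its input."""
    def __init__(self, clip_norm=1.0, sigma=0.7):
        super().__init__()
        self.clip_norm, self.sigma = clip_norm, sigma

    def forward(self, x):
        flat = x.flatten(1)
        norms = flat.norm(dim=1, keepdim=True).clamp(min=1e-12)
        clipped = flat * (self.clip_norm / norms).clamp(max=1.0)   # bound the sensitivity to clip_norm
        out = clipped.view_as(x)
        if self.training:                                          # the added noise is what yields the DP guarantee
            out = out + torch.randn_like(out) * self.clip_norm * self.sigma
        return out

# The layer would sit inside the discriminator, e.g. between two convolutional blocks.
noisy = GaussianNoiseLayer()(torch.randn(8, 16, 4, 4))
```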
In certain cases, when labels of interest are already a public knowledge and do not reveal any information about data, it may be sufficient. However, if labels privacy is required, it is possible to incorporate it in the proposed approach in two ways. A first solution is to modify the learning problem so that labels become a part of data. For example, if one wants to train a face recognition model with privacy-breaking labels (e.g. specific names— John, Bob, Julia, etc.), it is possible to add these labels to X , and instead use True and False labels in y, indicating whether the input image and the input name correspond to each other. This way, label privacy will be handled by the same framework. Alternatively, one can use a separate privacy-preserving mechanism to retrieve labels during training. In this case, the eventual privacy w.r.t. the pair (X, y) may be derived from a composition of two mechanisms, which is shown in the theorem below. One possible candidate for such mechanism is the noisy voting scheme as used in Papernot et al. (2016). Theorem 3. (Private labels) Given a feed-forward neural network N with (ε1, δ1)–differentially private outputs ŷ, and the training labels ỹ satisfying (ε2, δ2)–differential privacy w.r.t. the true labels y, the gradient updates ω(i)X are (ε1+ε2, δ1+δ2)–differentially private with respect to (X, y) on each iteration i of gradient descent. Proof. There are two privacy mechanismsM1 andM2 applied to X and y correspondingly. Observe thatM1 does not have access to y, and thus, y cannot influence the output probabilities ofM1. The same is true forM2 and X . Consequently, we can assume that both mechanisms are applied to a pair (X, y). This allows us to employ a basic sequential composition theorem for differential privacy (Dwork & Lei, 2009) to obtain the privacy bounds. While it may appeal to use parallel composition instead of sequential composition to obtain a tighter bound, since X and y appear to be disjoint, it would be incorrect. The reason is that X and y are strongly correlated and breaking privacy of one can reveal the other. Alternatively, one could use advanced composition theorems (see e.g. Dwork et al. (2010); Kairouz et al. (2017)) to prove tighter privacy bounds, but it is not the goal of our paper. 4.2 PRACTICAL ASPECTS Based on the analysis above, we can do a number of important observations regarding applicability of this technique. First of all, the analysis is performed for feed-forward networks. Other architectures, such as RNNs, LSTMs, or memory networks, require additional investigation. Second, we focused on deterministic networks, meaning that the only two sources of stochasticity are data shuffling and privacypreserving noise layer π. Additional randomness in the network would complicate the proofs by introducing uncertainty in mappings. Third, conditions of Lemma 1 dictate that the network layers prior to π must preserve adjacency of the input. One layer breaking this condition is batch normalisation, because it introduces interdependencies between examples inside a batch, and just one different instance can change an entire batch. Summarising these limitations, the neural network under question must • be a feed-forward network; • not have randomised layers, e.g. dropout; • not have adjacency breaking layers before the privacy layer, e.g. batch normalisation. In the following section, we will touch upon some implications of it that affect practical performance. 
Note that these restrictions only apply to the network in which we insert a privacy-preserving layer, i.e. only the discriminator in our case. 5 EVALUATION In this section, we provide some implementation details and discuss evaluation results obtained on the MNIST (LeCun et al., 1998) and SVHN (Netzer et al., 2011) datasets. 5.1 EXPERIMENTAL SETUP We evaluate our solution as follows. First, we train a generative model on the original datasets (using only the training part of each) with differential privacy by adding a Gaussian noise layer to the discriminator. We will call this model a teacher, analogously to Papernot et al. (2016). Then, we generate an artificial dataset of comparable size using the obtained model. Finally, we train a separate (non-private) classifier, which we call a student, on the generated data and test it using the held-out test sets. The last step is important from two perspectives: we can quantify the quality of the generated samples, as opposed to the visual inspection typically done with GANs, and we can compare test errors to previously reported values. Note that there are no dependencies between the teacher and the student models. Moreover, student models are not constrained to neural networks and can be implemented as any type of machine learning algorithm. We choose two commonly used image classification datasets for our experiments: MNIST and SVHN. MNIST is a handwritten digit recognition dataset consisting of 60’000 training examples and 10’000 test examples, each example being a 28x28 greyscale image. SVHN is also a digit recognition task, with 73’257 images for training and 26’032 for testing. The examples are coloured 32x32 pixel images of house numbers from Google Street View. 5.2 IMPLEMENTATION DETAILS Implementation was done in Python using PyTorch (http://pytorch.org). For the generative model, we used a modified version of DCGAN by Radford et al. (2015). More specifically, the discriminator consists of five (four for MNIST) convolutional layers followed by leaky ReLU activations and a linear classifier with a sigmoid output. We clip the output of the third convolutional layer (to ensure bounded sensitivity) and add Gaussian noise before passing it to the remaining convolutions with batch normalisation. The generator has two linear layers in front of five deconvolutions with batch normalisation and ReLU activations, followed by fractional max pooling with a tanh activation at the end. Both networks were trained using the Adam optimiser (Kingma & Ba, 2015) with parameters typical for GAN training: learning rate set to 0.0002, β1 = 0.5, β2 = 0.999, and a batch size of 32. Privacy bounds were evaluated using the moments accountant and the privacy amplification theorem (Abadi et al., 2016), and are therefore data-dependent and tighter than those obtained using standard composition theorems. The student network is constructed of two convolutional layers with ReLU activations, batch normalisation and max pooling, followed by two fully connected layers with ReLU, and a softmax output layer. Again, training is performed with the Adam algorithm. It is worth mentioning that this network does not achieve state-of-the-art performance on these datasets, but we are primarily interested in evaluating the performance drop compared to a non-private model rather than getting the best test score. 5.3 DISCUSSION Using the experimental setup and implementation described above, we were able to get results close to Papernot et al. (2016), although not quite matching their accuracy for the same privacy bounds on SVHN.
A performance gap is expected due to the more generic nature of our method and a simpler privacy-preserving procedure. Overall, we managed to achieve 98.19% accuracy on MNIST and 83.49% accuracy on SVHN while maintaining approximately (3.45, 10−5)- and (8, 10−6)-differential privacy. These numbers, along with the corresponding results of Papernot et al. (2016), can be found in Table 1. It is also worth noting that we did not perform rigorous hyper-parameter tuning due to limited computational resources; even better accuracy could have been achieved had we done so. Additionally, we trained a simple logistic regression model on MNIST, and obtained 88.96% accuracy on privately generated data compared to 92.58% on the original data, which confirms that any model can be used as a student. Examples of real and generated privacy-preserving images for the MNIST and SVHN data are depicted in Figure 2. It can be seen that the generated images do not have the same contrast and dynamic range as real examples, which is not a problem in non-private GANs. We attribute this to the lack of batch normalisation in the discriminator. In addition to the quantitative analysis of test errors and privacy bounds, we perform a visual inspection of generated examples and their corresponding nearest neighbours in the real data. Figure 3 depicts a set of generated private examples and their nearest real counterparts. We observe that while some generated images are very close to real examples, they do not match them exactly, differing either in shape, colour, or surrounding digits. Moreover, many pairs come from entirely different classes. 6 CONCLUSIONS We investigate the problem of non-interactive private data release with differential privacy guarantees. We employ generative adversarial networks to produce artificial privacy-preserving datasets. Contrary to existing privacy protection work in deep learning, this method allows publishing sanitised data and training arbitrary non-private models on it. The choice of GANs as a generative model ensures scalability and makes the technique suitable for real-world data with complex structure. Moreover, this method does not require running privacy tests on generated data before releasing it. Additionally, we introduce a novel method for preserving the privacy of training data specific to deep neural networks, based on adding noise in the embedding space during the forward pass. It provides differential privacy guarantees and allows constructing privacy-preserving models in a simple and straightforward fashion, without modifying optimisation algorithms. In our experiments, we show that student models trained on artificial data can achieve high utility on the MNIST dataset, while keeping the performance costs of the added privacy and flexibility at acceptable levels on the more complicated SVHN data. Adding privacy directly to the trained model still provides better accuracy, and therefore one possible direction for future work is to improve the quality of the generated data for given privacy bounds. Extending the presented technique and analysis to other types of deep neural networks provides another exciting opportunity for further research. 7 APPENDIX In this appendix, we restate and prove the lemmas and theorems from Section 4.1. 7.1 PROOF OF LEMMA 1 Lemma 1. If the output of the noise layer π(xπ) is (ε, δ)-differentially private w.r.t. xπ and the network layers before π preserve adjacency of X and X′, then π(X) is also (ε, δ)-differentially private w.r.t. X. Proof.
By the definition of differential privacy: P[π(xπ) ∈ S] ≤ eε P[π(x′π) ∈ S] + δ, (3) for all adjacent xπ and x′π. We need to show that the same holds for all adjacent inputs X, X′, i.e. P[π(X) ∈ S] ≤ eε P[π(X′) ∈ S] + δ. Observe that we defined our network as deterministic (i.e. not having any randomness apart from the initial data shuffling). Therefore, P[Xπ|X] = δxπ(Xπ), where δx(X) is a Dirac delta function. Conceptually, it means that the entire mass of the distribution of Xπ is concentrated on the point xπ. Using the above observation, P[π(X) ∈ S] = ∫ P[π(Xπ) ∈ S] P[Xπ|X] dXπ (4) = ∫ P[π(Xπ) ∈ S] δxπ(Xπ) dXπ (5) = P[π(xπ) ∈ S] (6) ≤ eε P[π(x′π) ∈ S] + δ (7) = ∫ (eε P[π(Xπ) ∈ S] + δ) δx′π(Xπ) dXπ (8) = ∫ (eε P[π(Xπ) ∈ S] + δ) P[Xπ|X′] dXπ (9) = eε P[π(X′) ∈ S] + δ (10) Remark. Allowing randomised layers in the network would complicate the proof due to the marginalisation over all possible outcomes Xπ corresponding to the input X. 7.2 PROOF OF THEOREM 1 Theorem 1. (Forward pass) The output ŷ of a deterministic feed-forward neural network N with an (ε, δ)-differentially private layer π is also (ε, δ)-differentially private with respect to X. Proof. Using the lemma above, we can show that the outputs of the layer π are (ε, δ)-differentially private w.r.t. the inputs X, i.e. P[π(X) ∈ S] ≤ eε P[π(X′) ∈ S] + δ (11) Since we require all the layers of N (except π) to be deterministic, there is a deterministic mapping from the outputs of π to ŷ. Let us denote this mapping f(π), and the preimage of a set S under this mapping f−1[S] (i.e. f−1[S] = {π : f(π) ∈ S}). Note that we treat X and X′ as points in the space of all datasets X, and thus π and f are not set-valued functions. Also, to avoid confusion, let us restate that f−1[S] is the preimage of a set S under f, and not a function inverse. Hence, we do not require f to be bijective, or even injective. Using the above, P[ŷ ∈ S] = P[f(π(X)) ∈ S] (12) = P[π(X) ∈ f−1[S]] (13) ≤ eε P[π(X′) ∈ f−1[S]] + δ (14) = eε P[f(π(X′)) ∈ S] + δ (15) = eε P[ŷ′ ∈ S] + δ, (16) for any pair of adjacent datasets X and X′ (differing in one training example), thus proving the theorem. 7.3 PROOF OF THEOREM 2 Theorem 2. (Backward pass) Given a feed-forward neural network N with (ε, δ)-differentially private outputs ŷ, the weight updates ω(i)X are also (ε, δ)-differentially private with respect to X in each iteration i of gradient descent. Proof. Let us denote by g(y, ŷ) = ∂L(y, ŷ)/∂ω the gradient of the loss function w.r.t. the network parameters. Similarly to Theorem 1, the preimage of a set T under g is denoted by g−1[y, T] = {ŷ : g(y, ŷ) ∈ T}. To better connect this with Theorem 1, let us define S = g−1[y, T]. Since the gradient is a function of the network outputs and labels, we have P[g(y, ŷ) ∈ T] = P[ŷ ∈ g−1[y, T]] = P[ŷ ∈ S]. (17) Combining the above results, P[ω(i)X ∈ T] = P[g(y, ŷ) ∈ T] (18) = P[ŷ ∈ S] (19) ≤ eε P[ŷ′ ∈ S] + δ (20) = eε P[g(y, ŷ′) ∈ T] + δ (21) = eε P[ω(i)X′ ∈ T] + δ, (22) for any pair of adjacent datasets X and X′, demonstrating that the weight updates stay (ε, δ)-differentially private w.r.t. the input. 7.4 MOMENTS ACCOUNTANT The privacy bound produced by the strong composition theorem is often too loose, and therefore we exploit the moments accountant technique developed by Abadi et al. (2016) for analysing their DP-SGD algorithm. To give the main idea of the method, let us start by defining the privacy loss. Definition 2. Let M : D → R be a randomized mechanism and d, d′ a pair of adjacent databases.
Let aux denote an auxiliary input. For an outcome o ∈ R, the privacy loss at o is defined as: c(o; M, aux, d, d′) := log ( Pr[M(aux, d) = o] / Pr[M(aux, d′) = o] ). (23) The privacy loss random variable C(M, aux, d, d′) is then defined as c(M(d); M, aux, d, d′). The moments accountant is defined as follows: Definition 3. Again, let M : D → R be a randomized mechanism, d, d′ a pair of adjacent databases, and aux an auxiliary input. The moments accountant is αM(λ) := max_{aux, d, d′} αM(λ; aux, d, d′), (24) where αM(λ; aux, d, d′) := log E[exp(λ C(M, aux, d, d′))] is the logarithm of the moment-generating function of the privacy loss random variable. In short, the moments accountant method tracks bounds on the moments of the privacy loss random variable and then uses Markov's inequality to obtain the tail bound on this random variable corresponding to the values of ε and δ.
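As an operational illustration of how such moment bounds translate into an (ε, δ) guarantee, the sketch below composes the closed-form bound α(λ) = λ(λ + 1)/(2σ²) of a unit-sensitivity Gaussian mechanism (without subsampling) over many steps and applies the tail bound. This is a simplified stand-in for the data-dependent accounting used in the paper; the number of steps, the noise multiplier, and δ are illustrative values only.

```python
import numpy as np

def gaussian_moment_bound(lam, sigma):
    """alpha(lambda) = lambda * (lambda + 1) / (2 * sigma^2) for the Gaussian mechanism
    with unit sensitivity and no subsampling."""
    return lam * (lam + 1) / (2.0 * sigma ** 2)

def eps_from_moments(sigma, steps, delta, max_lambda=64):
    """Composability: moment bounds add over steps; the tail bound then gives
    eps = min_lambda (alpha_total(lambda) + log(1 / delta)) / lambda."""
    lams = np.arange(1, max_lambda + 1)
    alpha_total = steps * gaussian_moment_bound(lams, sigma)
    return float(np.min((alpha_total + np.log(1.0 / delta)) / lams))

# Example: 500 noisy releases with sigma = 16 at delta = 1e-5 (illustrative values only).
print(eps_from_moments(sigma=16.0, steps=500, delta=1e-5))
```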
1. What is the focus and contribution of the paper on generating differentially private datasets using GANs? 2. What are the strengths and weaknesses of the paper, particularly in terms of its experimental results and privacy guarantees? 3. Do you have any concerns or questions regarding the paper's approach to ensuring differential privacy in GANs? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review This paper considers the problem of generating differentially private datasets using GANs. To the best of my knowledge this is the first paper to study differential privacy for GANs. The paper is fairly well-written but has several major weaknesses: -- Privacy parameter eps = 8 used in the experiments implies that the likelihood of any event can change by e^8 which is roughly 3000, which is an unacceptably high privacy loss. Moreover, even for this high privacy loss the accuracy on the SVHN dataset seems to drop a lot (92% down to 83%) when proposed mechanism is used. -- I didn't find a formal proof of the privacy guarantee in the paper. The authors say that the privacy guarantee is based on the moments accountant method, but I couldn't find the proof anywhere. The method itself is introduced in Section 7.4 but isn't used for the proof. Thus the paper seems to be incomplete.
ICLR
Title Unsupervised Model-based Pre-training for Data-efficient Control from Pixels Abstract Controlling artificial agents from visual sensory data is an arduous task. Reinforcement learning (RL) algorithms can succeed in this but require large amounts of interactions between the agent and the environment. To alleviate the issue, unsupervised RL proposes to employ self-supervised interaction and learning, for adapting faster to future tasks. Yet, whether current unsupervised strategies improve generalization capabilities is still unclear, especially in visual control settings. In this work, we design an unsupervised RL strategy for data-efficient visual control. First, we show that world models pre-trained with data collected using unsupervised RL can facilitate adaptation for future tasks. Then, we analyze several design choices to adapt faster, effectively reusing the agents’ pre-trained components, and planning in imagination, with our hybrid planner, which we dub Dyna-MPC. By combining the findings of a large-scale empirical study, we establish an approach that strongly improves performance on the Unsupervised RL Benchmark, requiring 20× less data to match the performance of supervised methods. The approach also demonstrates robust performance on the Real-Word RL benchmark, hinting that the approach generalizes to noisy environments. 1 INTRODUCTION Modern successes of deep reinforcement learning (RL) have shown promising results for control problems (Levine et al., 2016; OpenAI et al., 2019; Lu et al., 2021). However, training an agent for each task individually requires a large amount of task-specific environment interactions, incurring huge redundancy and prolonged human supervision. Developing algorithms that can efficiently adapt and generalize to new tasks has hence become an active area of research in the RL community. In computer vision and natural language processing, unsupervised learning has enabled training models without supervision to reduce sample complexity on downstream tasks (Chen et al., 2020; Radford et al., 2019). In a similar fashion, unsupervised RL (URL) agents aim to learn about the environment without the need for external reward functions, driven by intrinsic motivation (Pathak et al., 2017; Burda et al., 2019a; Bellemare et al., 2016). Any learned models can then be adapted to downstream tasks, aiming to reduce the required amount of interactions with the environment. Recently, the Unsupervised RL Benchmark (URLB) (Laskin et al., 2021) established a common protocol to compare self-supervised algorithms across several domains and tasks from the DMC Suite (Tassa et al., 2018). In the benchmark, an agent is allowed a task-agnostic pre-training stage, where it can interact with the environment in an unsupervised manner, followed by a fine-tuning stage where, given a limited budget of interactions with the environment, the agent should quickly adapt for a specific task. However, the results obtained by Laskin et al. (2021) suggest that current URL approaches may be insufficient to perform well on the benchmark, especially when the inputs of the agent are pixel-based images. World models have proven highly effective for solving RL tasks from vision both in simulation (Hafner et al., 2021; 2019a) and in robotics (Wu et al., 2022), and they are generally data-efficient as they enable learning behavior in imagination (Sutton, 1991). 
Inspired by previous work on exploration (Sekar et al., 2020), we hypothesize this feature could be key in the unsupervised RL setting, as a pre-trained world model can leverage previous experience to learn behavior for new tasks in imagination, and in our work, we study how to best exploit this feature. We adopt the URLB setup to perform a large-scale study, involving several unsupervised RL methods for pre-training model-based agents, different fine-tuning strategies, and a new improved algorithm for efficiently planning with world models. The resulting approach, which combines the findings of our study, strongly improves performance on the URL benchmark from pixels, nearly achieving the asymptotic performance of supervised RL agents, trained with 20x more task-specific data, and bridging the gap with low-dimensional state inputs (Laskin et al., 2021). Contributions. This work does not propose a novel complex method. Rather, we study the interplay of various existing components and propose a novel final solution that outperforms existing state of the art on URLB by a staggering margin. Specifically: • we demonstrate that unsupervised RL combined with world models can be an effective pre-training strategy to enable data-efficient visual control (Section 3.1), • we study the interplays between the agent’s pre-trained components that improve sample efficiency during fine-tuning (Section 3.2), • we propose a novel hybrid planner we call Dyna-MPC, which allows us to effectively combine behaviors learned in imagination with planning (Section 3.3), • combining our findings into one approach, we outperform previous approaches on URLB from pixels, nearly solving the benchmark (Section 4.1), • we show the approach is resilient to environment perturbations, evaluating it on the Real World RL benchmark (Dulac-Arnold et al., 2020) (Section 4.2), • we present an extensive analysis of the pre-trained agents, aimed at understanding in-depth the current findings and limitations (Section 4.3). An extensive empirical evaluation, supported by more than 2k experiments, among main results, analysis and ablations, was used to carefully design our method. We hope that our large-scale evaluation will inform future research towards developing and deploying pre-trained agents that can be adapted with considerably less data to more complex/realistic tasks, as it has happened with unsupervised pre-trained models for vision (Parisi et al., 2022) and language (Ahn et al., 2022). 1 2 PRELIMINARIES Reinforcement learning. The RL setting can be formalized as a Markov Decision Process (MDP), denoted with the tuple {S,A, T,R, γ}, where S is the set of states, A is the set of actions, T is the state transition dynamics, R is the reward function, and γ is a discount factor. The objective of an RL agent is to maximize the expected discounted sum of rewards over time for a given task, also called return, and indicated as Gt = ∑T k=t+1 γ (k−t−1)rk. In continuous-action settings, you can learn an actor, i.e. a model predicting the action to take from a certain state, and a critic, i.e. a model that estimates the expected value of the actor’s actions over time. Actor-critic algorithms can be combined with the expressiveness of neural network models to solve complex continuous control tasks (Haarnoja et al., 2018; Lillicrap et al., 2016; Schulman et al., 2017). 1The PyTorch code for the experiments will be open-sourced upon publication. Unsupervised RL. 
In this work, we investigate the problem of fast adaptation for a downstream task, after a phase of unsupervised training and interaction with the environment. Our training routine, based on the setup of URLB (Laskin et al., 2021), is made of two phases: a pre-training (PT) phase, where the agent can interact with a task-agnostic version of the environment for up to 2M frames, and a fine-tuning phase (FT), where the agent is given a task to solve and a limited budget of 100k frames. During the PT phase, rewards are removed so that sensible information about the environment should be obtained by exploring the domain-dependent dynamics, which is expected to remain similar or unchanged in the downstream tasks. During FT, the agent receives task-specific rewards when interacting with the environment. As the agent has no prior knowledge of the task, it should both understand the task and solve it efficiently, in a limited interaction budget. In this setting, the performance of unsupervised model-free RL (Yarats et al., 2022) were shown to be insufficient as reported in (Laskin et al., 2021). We believe the key reason for this is that model-free RL algorithms can exploit only a little part of the information obtained with self-supervised interaction, as they rely uniquely on actor and critic’s predictions. World models. In this work, we ground upon the DreamerV2 agent (Hafner et al., 2021), which learns a world model (Ha & Schmidhuber, 2018; Hafner et al., 2019b) predicting the outcomes of actions in the environment. The dynamics is captured into a latent space Z , providing a compact representation of the high-dimensional inputs. The world model consists of the following components: Encoder: et = fϕ(st), Decoder: pϕ(st|zt), Dynamics: pϕ(zt|zt−1, at−1), Posterior: qϕ(zt|zt−1, at−1, et). The model states zt have both a deterministic component, modeled using the recurrent state of a GRU (Chung et al., 2014), and a (discrete) stochastic component. The encoder and decoder are convolutional neural networks (CNNs) and the remaining components are multi-layer perceptrons (MLPs). The world model is trained end-to-end by optimizing an evidence lower bound (ELBO) on the log-likelihood of the data collected in the environment (Hafner et al., 2019b;a). For the encoder and the decoder networks, we used the same architecture as in Hafner et al. (2021). For control, the agent learns latent actor πθ(at|zt) and critic vψ(zt) networks. Both components are trained online within the world model, by imagining the model state outcomes of the actions produced by the actor, using the model dynamics. Rewards for imagined trajectories are provided by a reward predictor, pϕ(rt|zt) trained to predict environment rewards, and they are combined with the critic predictions to produce a GAE-λ estimate of the returns (Schulman et al., 2016). The actor maximizes estimates of returns, backpropagating gradients through the model dynamics. The hyperparameters for the agent, which we keep fixed across all domains/tasks, can be found in Appendix H. 3 UNSUPERVISED MODEL-BASED PRE-TRAINING FOR DATA-EFFICIENT CONTROL FROM PIXELS To best exploit self-supervised pre-training for data-efficient adaptation, it is important that the agent: (i) meaningfully interacts with the environment during the PT phase, to discover useful transitions; (ii) successfully reuses the modules learned during PT for fast adaptation; and (iii) efficiently employs the FT phase to quickly understand and master the downstream task. 
In this section, we use an experiment-driven approach to find which methods or components are best at tackling these challenges. Experimental procedure. We employ the URL benchmark that consists of three control domains, Walker, Quadruped and Jaco, and twelve tasks, four per domain. To evaluate the agents, we take snapshots of the agent at different times during training, i.e. 100k, 500k, 1M, and 2M frames, and finetune the agent for 100k frames. In all bar plots, we show average normalized returns on downstream tasks with error bars showing the standard deviation. To normalize results in a comparable way for all tasks, we train a fully-supervised agent with 2M frames per task. We use the mean performance of this agent, which we refer to as "oracle", as the reference scores to normalize our results in the plots (details in Appendix A). For all experiments, results are presented with at least three random seeds. 3.1 UNSUPERVISED PRE-TRAINING In the PT stage, unsupervised RL can be used to explore the environment, collecting the data to train the components of the agent. The resulting networks are then used to initialize respective components in the agent deployed for the downstream task, aiming to reduce sample complexity during FT. The first question we address is thus "What kinds of agents work best with unsupervised pre-training?". Unsupervised RL methods can be grouped into three categories (Laskin et al., 2021): knowledgebased, which aim to increase the agent’s knowledge by maximizing error prediction (Pathak et al., 2017; 2019; Burda et al., 2019b), data-based, which aim to achieve diversity of data (Yarats et al., 2021; Liu & Abbeel, 2021b) and competence-based, which aim to learn diverse skills (Liu & Abbeel, 2021a; Eysenbach et al., 2019). In Figure 2a we report the results from Laskin et al. (2021), showing that none of these approaches is particularly effective on URLB when combined with the DrQ model-free agent (Yarats et al., 2022), state-of-the-art in RL from pixels, where the data collected with unsupervised RL is used to pre-train the agent’s actor, critic, and encoder. To demonstrate that world models can be used to effectively exploit unsupervised RL data collection for fast adaptation, we study multiple approaches and use them to pre-train the Dreamer’s world model and latent actor. As knowledge-based methods we employ ICM (Pathak et al., 2017), LBS (Mazzaglia et al., 2021b), Plan2Explore (P2E; (Sekar et al., 2020)), and RND (Burda et al., 2019b). As a data-based approach, we choose APT (Liu & Abbeel, 2021b), and as competence-based approaches, we adopt DIAYN (Eysenbach et al., 2019) and APS (Liu & Abbeel, 2021a). Finally, we also test random actions, as a naive maximum entropy baseline (Haarnoja et al., 2018). Details on these methods and how we combined them with the Dreamer algorithm are discussed in Appendix B. Aggregating results per category, in Figure 2b, we show that by leveraging a pre-trained world model the overall performance improves over time for all categories, as opposed to the model-free results, where only knowledge-based approaches slightly improve. In particular, data-based and knowledge-based methods are more effective in the Walker and Quadruped domains, and random actions and competence-based are more effective in the Jaco domain. Detailed results for each method are available in Appendix E. 3.2 FINETUNING PRE-TRAINED AGENTS Some of the components learned during the PT phase, such as the world model, can be reused for fast adaptation during FT. 
However, as the reward changes from the pseudo-reward to the task reward when moving from the PT to the FT phase, it is not clear if pre-training of the actor and critic can help the downstream task. To shed light on this, we seek to answer: "Which pre-trained components are useful for downstream tasks?". Here, we test different fine-tuning configurations, where we copy the weights of some of the PT components into the agent to fine-tune for the downstream task. We run the tests for the several unsupervised RL methods combined with Dreamer that we presented in Section 3.1 and show aggregated results in Figure 3 (detailed results for each method in Appendix E). Overall, fine-tuning the PT world model provides the most significant boost in performance, strengthening the hypothesis that world models are very effective with unsupervised RL. Fine-tuning the actor improves performance slightly in Walker and remarkably in Quadruped, but is harmful in the Jaco domain. An intuitive explanation is that in the Quadruped and Walker moving tasks, the exploratory behaviors help discover rewards faster. Instead, in the Jaco goal-reaching tasks, the agent needs to reach a certain target with sparse rewards. If the PT actor is initialized to move far from the target, the agent might struggle to find rewards in the small FT budget. Finally, using a PT critic is systematically worse. This can be explained by the discrepancy between intrinsic rewards and task rewards. 3.3 LEARNING AND PLANNING IN IMAGINATION Knowing a model of the environment, traditional model-based control approaches, e.g. model predictive control (MPC) (Williams et al., 2015; Chua et al., 2018; Richards, 2005), can be used to plan the agent’s actions. Nonetheless, using actor-critic methods has several advantages, such as amortizing the cost of planning by caching previously computed (sub)optimal actions and computing long-term returns from a certain state, without having to predict outcomes that are far in the future. More recent hybrid strategies, such as LOOP (Sikchi et al., 2020) and TD-MPC (Hansen et al., 2022), allow combining trajectories sampled from the actor with trajectories sampled from a distribution over actions that is iteratively improved. The model and the critic are used to evaluate the trajectories, improve them, and eventually select the most promising actions, i.e. planning. In this section, we answer the question: Can we accelerate downstream task adaptation by leveraging planning? Dyna-MPC. As we pre-train a world model, we could exploit planning in latent space to adapt with limited additional environment interaction. One problem with the above strategies is that they are based upon learning an off-policy actor and critic, which in our context would prevent us from exploiting the PT model to learn the actor and critic in imagination. Algorithm 1 Dyna-MPC Require: Actor θ, Critic ψ, World Model ϕ 1: µ, σ: initial parameters for sampling actions 2: N, Nπ: num trajectories, num policy trajectories 3: zt, H: current model state, planning horizon 4: for each iteration j = 1..J do 5: Sample N trajectories of length H from N(µ, σ²I), starting from zt 6: Sample Nπ trajectories of length H using the actor πθ, starting from zt 7: Estimate future states, using the model, and returns, using reward and critic predictions 8: Update µ and σ using MPPI (Williams et al., 2015) 9: end for 10: return at ∼ N(µt, σ²t I)
In order to enable hybrid planning with the behavior learned in imagination (Hafner et al., 2019a), we develop a modification of these approaches, which we call Dyna-MPC, that combines the actor and critic learned in imagination with MPPI (Williams et al., 2015) for planning. As detailed in Algorithm 1, at each time step, we imagine a set of latent trajectories using the model, by sampling actions from a time-dependent multivariate gaussian and from the actor policy, trained with Dreamer in imagination. Returns for MPPI are estimated using reward predictions by the model and the critic. MPPI is used to update the parameters of the multivariate gaussian for J iterations. Details on how returns are estimated and the MPPI updates work are given in Appendix C. One significant difference with previous approaches is that the policy in Dyna-MPC is learned on-policy in imagination, thus no correction for learning off-policy is required (Sikchi et al., 2020). Given the insights from the previous section, we use the world models and actors pre-trained with all the different unsupervised strategies we considered (see Section 3.1)2 and test their FT performance with and without planning with Dyna-MPC. Aggregated scores are reported in Figure 4, and detailed results for each method are available in Appendix E. We observe that adopting Dyna-MPC is always beneficial, as it improves the average performance and reduces variance in all domains. 3.4 OUR METHOD: COMBINING THE FINDINGS TOGETHER In the large-scale study, we explored several design choices to establish the most adequate approach to tackle the URL benchmark, aiming to provide a general recipe for data-efficient adaptation thanks to unsupervised RL. Our approach combines the main findings we presented in the previous sections: 1. learning a model-based agent with data collected using unsupervised RL (Figure 2); 2. fine-tuning the PT world model (always) and the pre-trained actor (where beneficial), while learning the critic from scratch (Figure 3); 3. adopting a hybrid planner, as the proposed Dyna-MPC, to leverage both learning and planning in imagination (Figure 4). An overview of the method is illustrated in Figure 1 and the algorithm is presented in Appendix D. We believe the above recipe could be generally applied to unsupervised settings, also outside of URLB, with the precaution that one should carefully make two decisions: (a) whether fine-tuning the PT actor is meaningful for the downstream task or it’s better to re-learn it from scratch, (b) what is the best URL strategy to collect data. Both decisions strongly depend on the target domain/task and so it is difficult to assess their implications beforehand. However, adopting unsupervised strategies that specifically focus on interacting with interesting elements of the environment, e.g. objects, or that quickly explore large areas of the environment at the beginning of fine-tuning may help exploring and revisiting crucial states of the environment more easily (Parisi et al., 2021). For URLB, we already established (a) that the PT actor is effective in Walker and Quadruped tasks, but it is better re-learn the actor from scratch in Jaco, in Section 3.2. To decide which URL strategy to use (b) we present a detailed comparison of the performance of our approach using different exploration strategies. The results in Figure 5 show that the agent using LBS during pre-training performs overall best, as it has the highest interquartile mean (IQM) and mean scores, and the lowest optimality gap. 
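Before moving on to the evaluation, a simplified sketch of the Dyna-MPC planner of Algorithm 1 is given below. The world-model, actor, and critic interfaces (model.step, actor, critic), the softmax-based MPPI weighting, and the toy stand-ins in the usage example are illustrative assumptions and do not reproduce the exact update rules or hyperparameters of the agent.

```python
import torch

def dyna_mpc_plan(model, actor, critic, z_t, act_dim, horizon=5, n_samples=512,
                  n_policy=32, iterations=6, temperature=0.5):
    """Simplified Dyna-MPC: mix sampled action sequences with actor rollouts, score them with
    model rewards plus a critic bootstrap, and refine a Gaussian over action sequences with
    exponentially weighted (MPPI-style) updates. Assumed interfaces:
    model.step(z, a) -> (z_next, reward), actor(z) -> action, critic(z) -> value."""
    mu = torch.zeros(horizon, act_dim)
    std = torch.ones(horizon, act_dim)

    def rollout_returns(actions):                   # actions: [num_traj, horizon, act_dim]
        z = z_t.expand(actions.shape[0], -1)
        ret = torch.zeros(actions.shape[0])
        for h in range(horizon):
            z, r = model.step(z, actions[:, h])
            ret = ret + r
        return ret + critic(z)                      # bootstrap the tail of the trajectory with the critic

    for _ in range(iterations):
        sampled = mu + std * torch.randn(n_samples, horizon, act_dim)
        pi_actions, z = [], z_t.expand(n_policy, -1)
        for _ in range(horizon):                    # policy trajectories: unroll the actor in the model
            a = actor(z)
            pi_actions.append(a)
            z, _ = model.step(z, a)
        actions = torch.cat([sampled, torch.stack(pi_actions, dim=1)], dim=0)
        weights = torch.softmax(rollout_returns(actions) / temperature, dim=0)
        mu = (weights[:, None, None] * actions).sum(0)
        std = (weights[:, None, None] * (actions - mu) ** 2).sum(0).sqrt().clamp(min=0.1)

    return mu[0] + std[0] * torch.randn(act_dim)    # sample only the first action for execution

# Toy stand-ins so the sketch runs end-to-end; a trained latent world model would replace these.
class ToyModel:
    def step(self, z, a):
        return z + 0.1 * a.mean(dim=-1, keepdim=True), -z.abs().sum(dim=-1)

a0 = dyna_mpc_plan(ToyModel(), actor=lambda z: torch.tanh(z[:, :2]),
                   critic=lambda z: torch.zeros(z.shape[0]), z_t=torch.randn(1, 2), act_dim=2)
```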
4 EVALUATION AND ANALYSIS
4.1 UNSUPERVISED REINFORCEMENT LEARNING BENCHMARK
In Section 3, we presented our approach, which combines the findings from our empirical large-scale study on URLB. In Figure 6, we compare the results from the original URLB paper with our approach. The performance of our method is superior in all domains. The second strongest method (DrQ with Disagreement) approaches an overall performance of 40% of the respective supervised baseline performance, while our method recovers more than 90% of its supervised counterpart.
4.2 REAL-WORLD REINFORCEMENT LEARNING BENCHMARK
Algorithms developed in simulation struggle to transfer to real-world systems due to a series of implicit assumptions that are rarely satisfied in real environments, e.g. URLB assumes the dynamics between PT and FT stay the same. The RWRL benchmark (Dulac-Arnold et al., 2020) considers several challenges that are common in real-world systems and implements them on top of DMC tasks. We employ vision-based variants of the Walker Walk and Quadruped Walk tasks from the RWRL benchmark. These tasks introduce system delays, stochasticity, and perturbations of the robot’s model and sensors, which are applied with three degrees of intensity to the original environment, i.e. ‘easy’, ‘medium’, and ‘hard’ (details in Appendix F). We seek to answer the following questions for perturbed settings:
• Does unsupervised PT enable faster adaptation?
• Does unsupervised RL provide an advantage over random exploration?
• Does hybrid planning improve performance, as in URLB?
In Figure 7, we present the results of our method, using LBS during PT, with and without planning with Dyna-MPC for FT, and compare to random exploration and training from scratch for 100k, 1M, and 2M frames. Crucially, the PT models are trained in the vanilla task-agnostic version of the environments from the DMC Suite, so that the results highlight the extent to which models trained in ideal conditions generalize to perturbed settings when fine-tuned in a low-data regime. Overall, we found that fine-tuning PT models offers an advantage over training from scratch for 100k frames, despite all the variations in the environment. Furthermore, on the Quadruped Easy and Medium settings, our method performs better than Dreamer@1M and not far from Dreamer@2M while using 10x and 20x less task-specific data, respectively. Our method also performs close to Dreamer@1M/2M in the Walker Easy task. Unsupervised RL for data collection (Ours) outperforms random actions in the ‘easy’ and ‘medium’ settings, showing that a better PT model yields higher FT performance, even when the dynamics of the downstream task is affected by misspecifications and noisy factors. Finally, in contrast with the findings on URLB, adopting the hybrid planner is not generally beneficial. We believe this is because the model’s predictions are less certain and precise in this setting and thus cannot inform the short-term planner accurately.
4.3 EXTENDED ANALYSIS
To better analyze the learned components, we conducted a range of additional experiments. For conciseness, detailed descriptions of the experimental settings are deferred to Appendix G and we briefly summarize the takeaways in this section.
Learning rewards online.
We verify whether having to discover and learn the reward function during FT impacts performance. In Figure 8, we compare against agents that (violating the URLB settings) know the task in advance and can pre-train a reward predictor during the PT stage. We see that learning the reward predictor does not affect performance significantly for dense-reward tasks, such as the Walker and Quadruped tasks. However, in sparser reward tasks, i.e. the Jaco ones, knowing reward information in advance provides an advantage. Finding sparse rewards efficiently remains a challenge for future research. More details in Appendix G.1.
Zero-shot adaptation. Given a reward predictor learned during PT, it could be possible to perform zero-shot control with MPC methods if the model and the reward function allow it. In Figure 9, we show that although zero-shot MPC (ZS) offers an advantage over Dreamer@100k, the FT phase is crucial to deliver high performance on the downstream tasks, as the agent uses this phase to collect missing information about the environment and the task. Further details in Appendix G.2.
Latent dynamics discrepancy (LDD). We propose a novel metric, Latent Dynamics Discrepancy, which evaluates the distance between the latent predictions of the PT model and the same model after FT on a task. In Figure 10, we show the correlation between our metric and the performance ratio between using the PT model and the FT model for planning (see Appendix G.3 for a detailed explanation). We observed a strong negative Pearson correlation (−0.62, p-value: 0.03), highlighting that major updates in the model dynamics during FT played an important role in improving performance.
Unsupervised rewards and performance. We analyze the correlation between the normalized performance of different agents and their intrinsic rewards for optimal trajectories obtained by an oracle agent in Table 1. In particular, the correlation for LBS, which overall performs best in URLB, is statistically significant, as its p-value is < 0.05. We believe this correlation might be one of the causes of LBS's outstanding performance. Further insights are provided in Appendix G.4.
5 RELATED WORK
Model-based control. Dynamics models combined with powerful search methods have led to impressive results on a wide variety of tasks such as Atari (Schrittwieser et al., 2020) and continuous control (Hafner et al., 2019a; Janner et al., 2019; Sikchi et al., 2021; Lowrey et al., 2018). LOOP (Sikchi et al., 2020) and TD-MPC (Hansen et al., 2022) combine temporal difference learning and MPC. The model proposed with TD-MPC is task-oriented and thus requires a task to accelerate learning. In our work, we focus on unsupervised model learning, grounding on the DreamerV2 model (Hafner et al., 2021), whose supervision comes from predicting the environment’s observations. Methods that use no reconstruction could generalize better to visual differences (Mazzaglia et al., 2021a; Ma et al., 2020) but they lose in explainability, as they cannot decode imagined trajectories.
Unsupervised RL. Prior to our work, the large-scale study of curiosity (Burda et al., 2018) provided an insightful analysis of the performance of knowledge-based methods in the reward-free setting. In our work, we leverage the URLB setting to provide an analysis of a combination of model-based control techniques with unsupervised RL. This allowed us to formulate a strategy to adapt pre-trained models to visual control tasks in a data-efficient manner. Closely, Sekar et al.
(2020) adapts the Disagreement algorithm (Pathak et al., 2019) to work with Dreamer (Hafner et al., 2019a). In our work, in addition to analyzing a wider choice of unsupervised RL strategies, we show how to better exploit the agent's PT components for adaptation, and we propose a hybrid planner to improve data-efficiency.
Transfer learning. In the field of transfer learning, fine-tuning is the most commonly used approach. However, fine-tuning all the pre-trained agent components may not be the most effective strategy. This problem has been studied in transfer learning for RL, mainly with the objective of transferring from one environment to another (Farebrother et al., 2018; Sasso et al., 2022; van Driessel & Francois-Lavet, 2021). Instead, we analyze which of the agent's components should be transferred from the unsupervised PT stage to the supervised FT stage when the environment's dynamics is assumed to remain similar or identical. Another stream of work has studied successor representations, to enable a better transfer of the agent's actor-critic (Hansen et al., 2020; Barreto et al., 2016).
6 CONCLUSION
In order to accelerate the development and deployment of learning agents for real-world tasks, it is crucial that the employed algorithms can adapt in a data-efficient way for multiple tasks. Our study provides an empirical analysis of several design choices, which allowed us to obtain near-optimal performance in URLB and which showed robustness to perturbations of the environment on the RWRL benchmark. We also analyzed several aspects of the learned models, to understand what could be improved further in the future to ease the adaptation process.
Limitations. In the Jaco reaching tasks, we found that a bad initialization of the pre-trained actor can actually harm the agent's performance. While competence-based approaches should address this limitation, by learning a variety of skill behaviors, their performance on the other domains has been subpar. Future work should aim to find a more general approach to pre-train behavior for fast adaptation or improve the exploration capabilities of competence-based approaches. Another issue we encountered, on the RWRL benchmark, is that if the environment introduces overly intense perturbations during adaptation, relying on the predictions of the adopted world model becomes problematic, to the extent that exploiting a planner is not useful anymore. Developing more resilient models that can be trained in an unsupervised fashion and used for data-efficient planning, even in the presence of complex perturbations, will be the focus of future studies.
Reproducibility statement We reported in the main text (Algorithm 1) the pseudo-code for Dyna-MPC and in Appendix D the pseudo-code for our end-to-end approach. We also provide instructions on how we implemented our methods (Appendix B) and all the model and training hyperparameters to implement and reproduce the results (Table 4). We will release our code and scripts.
A NORMALIZATION SCORES
In Table 2, we report the mean scores for the URLB Expert, used to normalize the scores in the URLB paper, and for Dreamer@2M, which we use to normalize the returns of our methods, where both supervised baselines have been trained individually on each of the 12 tasks from URLB for 2M frames. We additionally report means and standard deviations for the best performing unsupervised baseline from URLB, which is Disagreement (Pathak et al., 2019), and our method (using LBS for data collection). A minimal sketch of this normalization is given below.
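As a minimal illustration of the normalization described above, assuming a dictionary of per-task Dreamer@2M mean returns (values below are placeholders, not the actual Table 2 entries):

```python
def normalized_score(task, mean_return, dreamer2m_returns):
    """Normalize a fine-tuned agent's mean return on a task by the mean
    return of the Dreamer@2M supervised baseline on the same task."""
    return mean_return / dreamer2m_returns[task]

# Placeholder usage (illustrative values only):
# normalized_score("walker_flip", 720.0, {"walker_flip": 800.0})  # -> 0.9
```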
We notice that our scores approach the Dreamer@2M scores in several tasks, eventually outperforming them in a few tasks (e.g. Walker Flip, Quadruped Jump). We believe this merit is due both to the exploration pre-training, which may have found more rewarding trajectories than greedy supervised RL optimization, and to the improved Dyna-MPC planning strategy.
B INTEGRATING UNSUPERVISED RL STRATEGIES
We summarize here the unsupervised RL approaches tested and how we integrated them with the Dreamer algorithm for exploration. For all methods, rewards have been normalized during training using an exponential moving average with momentum 0.95, with the exceptions of RND, which follows its original reward normalization (Burda et al., 2019b), and APS, whose rewards are not normalized because they are used to regress the skill that is closest to the downstream task during FT.
ICM. The Intrinsic Curiosity Module (ICM; Pathak et al. (2017)) defines intrinsic rewards as the error between states projected in a feature space and a feature dynamics model's predictions. We use the Dreamer agent encoder e_t = f_ϕ(s_t) to obtain features and train a forward dynamics model g(e_t|e_{t−1}, a_{t−1}) to compute rewards as:
r_t^{ICM} ∝ ∥g(e_t|e_{t−1}, a_{t−1}) − e_t∥^2.
As the rewards for ICM require environment states (going through the encoder to compute the prediction error), we train a reward predictor to allow estimating rewards in imagination.
Plan2Explore. The Plan2Explore algorithm (Sekar et al., 2020) is an adaptation of the Disagreement algorithm (Pathak et al., 2019) for latent dynamics models. An ensemble of forward dynamics models is trained to predict the feature embedding e_t = f_ϕ(s_t), given the previous latent state and actions, i.e. g(e_t|z_{t−1}, a_{t−1}, w_k), where w_k are the parameters of the k-th predictor. Intrinsic rewards are defined as the variance of the ensemble predictions:
r_t^{P2E} ∝ Var({g(e_t|z_{t−1}, a_{t−1}, w_k) | k ∈ [1, ..., K]}).
Plan2Explore requires only latent states and actions, thus it can be computed directly in imagination. We used an ensemble of 5 models.
RND. Random Network Distillation (RND; Burda et al. (2019b)) learns to predict the output of a randomly initialized network n(s_t) that projects the states into a more compact random feature space. As the random network is not updated during training, the prediction error should diminish for already visited states. The intrinsic reward here is defined as:
r_t^{RND} ∝ ∥g(s_t) − n(s_t)∥^2.
As the rewards for RND require environment states (to encode with the random network), we train a reward predictor to allow estimating rewards in imagination.
LBS. In Latent Bayesian Surprise (LBS; Mazzaglia et al. (2021b)), the KL divergence between the posterior and the prior of a latent dynamics model is used as a proxy for the information gained, with respect to the latent state variable, by observing new states. Rewards are computed as:
r_t^{LBS} ∝ D_KL[q(z_t|z_{t−1}, a_{t−1}, e_t) ∥ p(z_t|z_{t−1}, a_{t−1})].
As the rewards for LBS require environment states (to compute the posterior distribution), we train a reward predictor to allow estimating rewards in imagination.
APT. Active Pre-training (APT; Liu & Abbeel (2021b)) uses a particle-based estimator based on the K nearest-neighbors algorithm (Singh et al., 2003) to estimate the entropy of a given state. We implement APT on top of the deterministic component of the latent states z̄_t, providing rewards as:
r_t^{APT} ∝ Σ_{i=1}^{k} log ∥z̄_t − z̄_t^i∥_2,
where z̄_t^i are the k nearest-neighbor states in latent space. A minimal code sketch of this particle-based reward is given below.
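The following is a minimal sketch of the particle-based APT reward defined above, assuming a batch of deterministic latent states as a PyTorch tensor; it is illustrative and may differ in details from the exact implementation used in our experiments.

```python
import torch

def apt_reward(z_det, k=12):
    """Particle-based entropy estimate (APT): sum of log-distances to the
    k nearest neighbors, computed within the batch of deterministic
    latent states z_det of shape (B, D)."""
    dists = torch.cdist(z_det, z_det)                      # (B, B) pairwise L2 distances
    knn = dists.topk(k + 1, largest=False).values[:, 1:]   # drop the zero self-distance
    return torch.log(knn + 1e-8).sum(dim=-1)               # (B,) intrinsic rewards
```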
As APT requires only latent states, it can be computed directly in imagination. We used k = 12 nearest neighbors.
DIAYN. Diversity is All You Need (DIAYN; Eysenbach et al. (2019)) maximizes the mutual information between the states and latent skills w. We implement DIAYN on top of the latent space of Dreamer, writing the mutual information as I(w_t, z_t) = H(w_t) − H(w_t|z_t). The entropy H(w_t) is kept maximal by sampling w_t ∼ Unif(w_t) from a discrete uniform prior distribution, while H(w_t|z_t) is estimated by learning a discriminator q(w_t|z_t). We compute intrinsic rewards as:
r_t^{DIAYN} ∝ log q(w_t|z_t).
Additionally, DIAYN maximizes the entropy of the actor, so we add an entropy maximization term to Dreamer's objective (Haarnoja et al., 2018). As DIAYN requires model states and skills sampled from a uniform distribution to compute rewards, we can directly compute them in imagination. For FT, the skill adapted is the one with the highest expected rewards, considering the states and rewards obtained in the initial episodes.
APS. Active Pre-training with Successor features (APS; Liu & Abbeel (2021a)) maximizes the mutual information between the states and latent skills w. We implement APS on top of the latent space of Dreamer, writing the mutual information as I(w_t, z_t) = H(z_t) − H(z_t|w_t). The entropy term H(z_t) is estimated using a particle-based estimator on top of the deterministic component of the latent states z̄_t, as for APT, while the term H(z_t|w_t) is estimated by learning a discriminator q(z_t|w_t). The intrinsic rewards for APS can be written as:
r_t^{APS} ∝ r_t^{APT} + log q(z_t|w_t).
As APS requires model states and uniformly sampled skills to compute rewards, we can directly compute them in imagination. For FT, the skill to adapt is selected using linear regression over the states and rewards obtained in the initial episodes (Liu & Abbeel, 2021a).
C DYNA-MPC
To further improve data efficiency, we chose to use a hybrid planner that combines reinforcement learning and MPC (Hansen et al., 2022; Sikchi et al., 2020; Lowrey et al., 2018). Previous works leveraged model-free off-policy algorithms (Hansen et al., 2022; Sikchi et al., 2020) to learn the actor and critic in a more computationally efficient manner. The policy used to act on the environment combines action samples from the actor network with MPC, while the critic and the actor are learned "offline" from previously collected data. This has several benefits but also leads to an issue referred to as "actor divergence" (Sikchi et al., 2020), which consists of the policy used for data collection being different from the policy that is used to learn the critic. In our study, we found that using the PT world model to learn the actor and the critic is crucial to improve data-efficiency during FT (see Figure 3). Thus, we discard the option of learning the actor and critic with off-policy deep RL. Instead, we design a new hybrid planner, which we call Dyna-MPC, that learns actor and critic functions in the model's imagination (Sutton, 1991), using the Dreamer algorithm (Hafner et al., 2019a), and then combines their predictions with MPPI (Williams et al., 2015) for acting on the environment. By doing so we mitigate the "actor divergence" issue, as actor and critic are learned on-policy on the trajectories generated with the model.
The critic is learned in the model’s imagination, computing the expected value of the actor’s actions using GAE-λ estimates of the returns (Schulman et al., 2016; Hafner et al., 2019a):
V^λ_t = r_t + γ_t [ (1 − λ) v_ψ(z_{t+1}) + λ V^λ_{t+1} ]   if t < H,
V^λ_t = v_ψ(z_H)                                           if t = H,      (1)
where r_t is the reward for state z_t, yielded by the reward predictor of the world model, and H is the imagination horizon. When computing returns for MPPI we use the same return estimates. At each time step, we use MPPI to select the best action. MPPI iteratively fits the parameters of a time-dependent multivariate Gaussian distribution with diagonal covariance, updating the mean and standard deviation parameters using an importance-weighted average of the top-k trajectories with the highest estimated returns. At every step, N trajectories Γ_i = {a_{0,i}, a_{1,i}, ..., a_{H,i}} of length H are obtained by sampling actions from the distributions a_t ∼ N(µ_t, σ²_t I), and N_π trajectories are sampled from the actor network, a_t ∼ π_θ(a_t|z_t); their outcomes are predicted using the model. At each MPPI iteration, the distribution parameters are updated as follows:
µ = ( Σ_{i=1}^{k} Ω_i Γ*_i ) / ( Σ_{i=1}^{N} Ω_i ),   σ = max( sqrt( Σ_{i=1}^{N} Ω_i (Γ*_i − µ)² / Σ_{i=1}^{N} Ω_i ), ϵ ),      (2)
where Ω_i = exp(τ V^λ_i), τ is a temperature parameter, * indicates that the trajectory is in the top-k, and ϵ is a clipping factor to avoid too small standard deviations (Hansen et al., 2022). To reduce the number of iterations required for convergence, we reuse the 1-step shifted mean obtained at the previous timestep (Argenson & Dulac-Arnold, 2020). A minimal code sketch of these return estimates and MPPI updates is given below.
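The following sketch illustrates the λ-returns of Eq. (1) and the MPPI parameter update of Eq. (2), under assumed tensor shapes. For numerical stability, the sketch subtracts the maximum return before exponentiating and normalizes the weights over the top-k trajectories; the exact implementation may differ in these details.

```python
import torch

def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """Eq. (1): lambda-returns over an imagined horizon.
    rewards: (H,) predicted rewards r_0..r_{H-1}
    values:  (H+1,) critic values v(z_0)..v(z_H), with v(z_H) as bootstrap."""
    H = rewards.shape[0]
    ret = values[H]                                   # V^lambda_H = v(z_H)
    out = []
    for t in reversed(range(H)):
        ret = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * ret)
        out.append(ret)
    out.reverse()
    return torch.stack(out)                           # (H,)

def mppi_update(trajs, returns, k=64, temperature=0.5, eps=1e-2):
    """Eq. (2): importance-weighted update of the Gaussian parameters.
    trajs:   (N_total, H, A) sampled action sequences (Gaussian + actor samples)
    returns: (N_total,) estimated lambda-returns of each trajectory."""
    top_ret, top_idx = returns.topk(k)                          # top-k trajectories
    top_trajs = trajs[top_idx]                                  # (k, H, A)
    omega = torch.exp(temperature * (top_ret - top_ret.max()))  # stabilized weights
    omega = omega / omega.sum()
    mu = (omega[:, None, None] * top_trajs).sum(dim=0)          # (H, A)
    var = (omega[:, None, None] * (top_trajs - mu) ** 2).sum(dim=0)
    sigma = torch.clamp(var.sqrt(), min=eps)                    # (H, A), clipped std
    return mu, sigma
```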
D ALGORITHM
Algorithm 2 Unsupervised Model-based Pre-Training for Data-efficient Control from Pixels
Require: Actor θ, Critic ψ, World Model ϕ
1: Intrinsic reward rint, extrinsic reward rext
2: Environment, M, downstream tasks Tk, k ∈ [1, . . . , M]
3: Pre-train frames NPT, fine-tune frames NFT, environment frames/update τ
4: Initial model state z0, hybrid planner Dyna-MPC, replay buffers DPT, DFT
5:
6: // Pre-training
7: for t = 0, . . . , NPT do
8:   Draw action from the actor, at ∼ πθ(at|zt)
9:   Apply action to the environment, st+1 ∼ P(·|st, at)
10:  Add transition to replay buffer, DPT ← DPT ∪ (st, at, st+1)
11:  Infer model state, zt+1 ∼ q(zt+1|zt, at, fϕ(st+1))
12:  if t mod τ = 0 then
13:    Update world model parameters ϕ on the data from the replay buffer DPT
14:    Update actor-critic parameters {θ, ψ} in imagination, maximizing rint
15:  end if
16: end for
17: Output pre-trained parameters {ψPT, θPT, ϕPT}
18:
19: // Fine-tuning
20: for Tk ∈ [T1, . . . , TM] do
21:  Initialize fine-tuning world model with ϕPT
22:  (Optional) Initialize fine-tuning actor with θPT
23:  for t = 0, . . . , NFT do
24:    Draw action from the actor, at ∼ πθ(at|zt)
25:    Use the planner for selecting the best action, at ∼ Dyna-MPC(zt)
26:    Apply action to the environment, st+1, rext_t ∼ P(·|st, at)
27:    Add transition to replay buffer, DFT ← DFT ∪ (st, at, rext_t, st+1)
28:    Infer model state, zt+1 ∼ q(zt+1|zt, at, fϕ(st+1))
29:    if t mod τ = 0 then
30:      Update world model parameters ϕ on the data from the replay buffer DFT
31:      Update actor-critic parameters {θ, ψ} in imagination, maximizing rext
32:    end if
33:  end for
34:  Evaluate performance on Tk
35: end for
E ADDITIONAL RESULTS
We present complete results, for each unsupervised RL method, for the large-scale study experiments presented in Section 3.
Can a pre-training stage longer than 2M frames be beneficial? In Figure 14, we report FT results with our full method, every 1M frames up to 5M PT frames. The aggregated results show that, adopting our method, longer PT can increase performance further, especially until 4M steps. The performance in all domains keeps increasing or remains steady until 5M steps, with two exceptional cases, Walker for Plan2Explore and Jaco for APS, where performance drops between 4M and 5M steps. For these experiments, we kept the size of the model and all the hyperparameters unvaried with respect to the 2M PT frames experiments, but we increased the replay buffer maximum size to 5M frames. By increasing model capacity and adopting additional precautions, such as annealing the learning rate, the agent could possibly benefit even more from longer pre-training; we aim to analyze this in more detail in future work.
F RWRL SETTINGS
We take the Quadruped and Walker tasks from the RWRL benchmark and replace the low-dimensional sensor inputs with RGB camera inputs. While this removes some of the perturbations planned in the benchmark (Dulac-Arnold et al., 2020), such as noise in the sensors, it introduces the difficulty of a different dynamics in pixel space (due to the other perturbations), compared to the one observed during pre-training in the vanilla simulation environment.
G EXTENDED ANALYSIS
We note that, to run the experiments faster, we did not use Dyna-MPC for the extended analysis. Furthermore, the Jaco tasks used here differ slightly from the original ones in URLB, only in that the target to reach cannot move. This allows consistency of the reward function between PT and FT, so that a reward predictor can be trained on 'reward-labelled' PT data. However, because of this change, the performance in Jaco may differ from the other main results (particularly in Figure 8 and Figure 9).
G.1 LEARNING REWARDS ONLINE
In Figure 8 of the main text, we measure the gap in performance between pre-trained agents that have no knowledge of the reward function at the beginning of fine-tuning and agents whose reward predictor is initialized from a reward predictor learned on top of the unsupervised pre-training data (violating the URLB settings). Crucially, during unsupervised PT, the agent can learn the reward predictor without affecting either the model learning or the exploration process. To avoid affecting the model, gradients are stopped between the reward predictor and the rest of the world model. To avoid affecting exploration, the rewards used to train the agent's actor and critic remain the intrinsic exploration rewards.
G.2 ZERO-SHOT ADAPTATION
Using agents that have access to a PT reward predictor, we explore the idea of zero-shot adaptation using MPC, which is trying to solve the URLB tasks using only planning with the pre-trained world model and reward predictor. In order to obtain good performance, this assumes that the model correctly learned the dynamics of the environment and explored rewarding transitions that are relevant to the downstream task during pre-training. In Figure 9 of the main text, we compare the results of performing MPC in a zero-shot setting (ZS) with the performance of an MPC agent that is allowed 100k frames for fine-tuning (FT). As for the MPC method, we employ MPPI (Williams et al., 2015). Because these experiments are particularly expensive to run, we ran them only on the agents trained with the Plan2Explore URL approach. We observe that the performance of zero-shot MPC is generally weak.
While it overall performs better than the non-pre-trained model, simply applying MPC leveraging the pre-trained world model and the reward predictor trained on the pre-training stage data is not sufficient to guarantee satisfactory performance. The fact that exploiting the fine-tuning stage with the same MPC approach generally boosts performance demonstrates that the model benefits greatly from the FT stage. Still, the performance of MPC generally lags behind the actor-critic performance, suggesting that, especially in a higher-dimensional action space such as the Quadruped one, amortizing the cost of planning with an actor-critic seems crucial to achieve higher performance.
G.3 LATENT DYNAMICS DISCREPANCY
Model misspecification is a useful measure to assess the uncertainty or inaccuracy of the model dynamics. It is computed as the difference between the dynamics predictions and the real environment dynamics. The metric helps build robust RL strategies that take the dynamics uncertainty into account while searching for the optimal behavior (Talvitie, 2018). However, with pixel-based inputs, the dynamics of the environment are observed through high-dimensional images. This, in turn, can hurt the metric evaluation, since distances in pixel space can be misleading. In our approach, we use a model-based RL agent that learns the dynamics model in a compact latent space Z. Our novel metric, Latent Dynamics Discrepancy (LDD), quantifies the "misspecification" of the learned latent dynamics accordingly. The metric quantifies the distance between the predictions of the pre-trained model and the same model after fine-tuning on a downstream task. However, as the decoder of the world model gets updated during fine-tuning, the latent space mapping between model states z and environment states s might drift. For this reason, we freeze the agent's decoder weights, so that the model can only improve the posterior and the dynamics. This ensures that the mapping Z → S remains unchanged and allows comparing the dynamics model after fine-tuning with the one before fine-tuning. In order to measure the distance between the distributions output by the dynamics network, we chose the symmetric Jensen-Shannon divergence:
LDD = E_{(z_t, a_t)} [ D_JS[ p_FT(z_{t+1}|z_t, a_t) ∥ p_PT(z_{t+1}|z_t, a_t) ] ],      (3)
where the expectation is taken over the previous model states z_t sampled from the fine-tuned posterior q_FT(z_t) and actions a_t sampled from an oracle actor π*(a_t|z_t), so that we evaluate the metric on optimal trajectories, whose environment state distribution corresponds to the stationary distribution induced by the actor, s_t ∼ d^{π*}(s_t). We used 30 trajectories per task in our evaluation. We observe in our experiments that there exists a correlation between the metric and the performance ratio between a zero-shot model and a fine-tuned model (see Figure 10 in the main paper). The key observation is that major updates in the model dynamics during the fine-tuning phase played an important role in improving the agent's performance, compared to the pre-trained model and zero-shot performance. Future research may attempt to reduce such dependency by either improving the model learning process, so that the pre-trained dynamics could have greater accuracy, or the data collection process, proposing URL methods that directly help reduce such uncertainty. A minimal sketch of how this metric can be computed is given below.
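A minimal sketch of how the LDD metric of Eq. (3) could be estimated on a batch of transitions from optimal trajectories; the dynamics interfaces (returning categorical probabilities over the discrete latent) are assumptions made for illustration.

```python
import torch

def js_divergence(p, q, eps=1e-8):
    """Jensen-Shannon divergence between batches of categorical
    distributions given as probability tensors of shape (B, K)."""
    m = 0.5 * (p + q)
    kl_pm = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(-1)
    kl_qm = (q * (torch.log(q + eps) - torch.log(m + eps))).sum(-1)
    return 0.5 * (kl_pm + kl_qm)

def latent_dynamics_discrepancy(pt_dynamics, ft_dynamics, z, a):
    """Eq. (3): mean JS divergence between the fine-tuned and pre-trained
    dynamics predictions on (z_t, a_t) pairs from optimal trajectories.
    Both dynamics are assumed to return next-latent probabilities (B, K)."""
    return js_divergence(ft_dynamics(z, a), pt_dynamics(z, a)).mean()
```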
G.4 UNSUPERVISED REWARDS AND PERFORMANCE
We further analyzed the correlation between the normalized performance of the different exploration agents and their intrinsic rewards for optimal trajectories obtained by an oracle agent. A strong negative correlation between the two factors should indicate that the agent is more interested in seeing the optimal trajectories when its performance on the task is low. We observe that there is a negative correlation between the performance of Plan2Explore (P2E), ICM, and LBS and their intrinsic rewards, while we found a correlation close to zero for RND (see Table 1 in the main text). Out of the methods tested, only for LBS is the correlation statistically significant, with a p-value < 0.05 (a short code sketch of this analysis is provided at the end of the appendix). This is likely one of the key factors for the high performance of the agent using LBS on the benchmark. One possible explanation is that LBS searches for transitions of the environment that are difficult to predict for the dynamics, so the model likely learns those transitions more accurately, facilitating planning during the fine-tuning stage. Another potential explanation is that, given the high correlation between intrinsic and extrinsic rewards, the actor initialized by LBS performs better at the beginning of FT, speeding up adaptation.
H HYPERPARAMETERS
Most of the hyperparameters we used for world-model training are the same as in the original DreamerV2 work (Hafner et al., 2021). Specific details are outlined here: for the pure MPC-based experiments, we increased the number of MPPI samples from 512 to 1000, the number of top-k trajectories from 64 to 100, and the horizon from 5 to 15, to compensate for the absence of the actor network's samples and the critic's predictions in the return estimates.
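As a small illustration of the correlation analysis of Appendix G.4 (Table 1), assuming arrays of per-run normalized scores and of average intrinsic rewards computed on the oracle trajectories:

```python
from scipy.stats import pearsonr

def reward_performance_correlation(normalized_scores, intrinsic_rewards):
    """Pearson correlation (and p-value) between an agent's normalized
    downstream performance and its intrinsic rewards on oracle trajectories."""
    r, p_value = pearsonr(intrinsic_rewards, normalized_scores)
    return r, p_value
```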
1. What is the focus of the paper in terms of unsupervised reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its empirical performance? 3. What are the weaknesses of the paper regarding its novelty and potential applications? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper studies the generalization capabilities of unsupervised RL empirically, and develops a hybrid planner with strong asymptotic performance and high sample efficiency on standard benchmarks.
Strengths And Weaknesses
Strengths: This work offers a large-scale benchmark of different URL techniques and compares the usefulness of different pre-trained components. The developed method has strong empirical performance.
Weaknesses: The main weakness is novelty, since most components in this work come from existing works. The paper pinpoints two decisions to make to apply the proposed framework to tasks outside URLB in Section 3.4. Within URLB, these decisions are made given empirical benchmarking results. However, more insights or explanations of when and why different techniques work in different settings, and of how much alignment between the pre-training and downstream tasks the proposed mechanism can handle, would further strengthen the contribution of this work.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and easy to follow. The method itself is built upon existing methods in the literature, but novelty itself is not a limitation given the strong empirical performance.
However, as the reward is changing from pseudo-reward to task reward when changing from the PT to the FT phase, it is not clear if pre-training of the actor and critic can help the downstream task. To shed light on this, we seek to answer: "Which pre-trained components are useful for downstream tasks?". Here, we test different fine-tuning configurations, where we copy the weights of some of the PT components into the agent to fine-tune for the downstream task. We run the tests for the several unsupervised RL methods combined with Dreamer that we presented in Section 3.1 and show aggregated results in Figure 3 (detailed results per each method in Appendix E). Overall, fine-tuning the PT world model provides the most significant boost in performance, strengthening the hypothesis that world models are very effective with unsupervised RL. Fine-tuning the actor improves performance slightly in Walker and remarkably in Quadruped, but is harmful in the Jaco domain. An intuitive explanation is that in the Quadruped and Walker moving tasks, the exploratory behaviors help discovering reward faster. Instead, in the Jaco goal-reaching tasks, the agent needs to reach a certain target with sparse rewards. If the PT actor is initialized to move far from the target, the agent might struggle to find rewards in the small FT budget. Finally, using a PT critic is systematically worse. This can be explained by the discrepancy between intrinsic rewards and task rewards. 3.3 LEARNING AND PLANNING IN IMAGINATION Knowing a model of the environment, traditional model-based control approaches, e.g. model predictive control (MPC) (Williams et al., 2015; Chua et al., 2018; Richards, 2005), can be used to plan the agent’s action. Nonetheless, using actor-critic methods has several advantages, such as amortizing the cost of planning by caching previously computed (sub)optimal actions and computing long-term returns from a certain state, without having to predict outcomes that are far in the future. More recent hybrid strategies, such as LOOP (Sikchi et al., 2020) and TD-MPC (Hansen et al., 2022), allow combining trajectories sampled from the actor with trajectories sampled from a distribution over actions that is iteratively improved. The model and the critic are used to evaluate the trajectories, Algorithm 1 Dyna-MPC Require: Actor θ, Critic ψ, World Model ϕ 1: µ, σ: initial parameters for sampling actions 2: N,Nπ: num trajectories, num policy trajectories 3: zt, H: current model state, planning horizon 4: for each iteration j = 1..J do 5: Sample N trajectories of length H from N (µ, σ2I), starting from zt 6: Sample Nπ trajectories of length H using the actor πθ, starting from zt 7: Estimate future states, using the model, and returns, using reward and critic predictions 8: Update µ and σ using MPPI (Williams et al., 2015) 9: end for 10: return at ∼ N (µt, σ2t I) improve them, and eventually select the most promising actions, i.e. planning. In this section, we answer the question: Can we accelerate downstream task adaptation by leveraging planning? Dyna-MPC. As we pre-train a world model, we could exploit planning in latent space to adapt with limited additional environment interaction. One problem with the above strategies is that they are based upon learning off-policy actor and critic, which in our context would prevent us from exploiting the PT model to learn the actor and critic in imagination. 
In order to enable hybrid planning with the behavior learned in imagination (Hafner et al., 2019a), we develop a modification of these approaches, which we call Dyna-MPC, that combines the actor and critic learned in imagination with MPPI (Williams et al., 2015) for planning. As detailed in Algorithm 1, at each time step, we imagine a set of latent trajectories using the model, by sampling actions from a time-dependent multivariate gaussian and from the actor policy, trained with Dreamer in imagination. Returns for MPPI are estimated using reward predictions by the model and the critic. MPPI is used to update the parameters of the multivariate gaussian for J iterations. Details on how returns are estimated and the MPPI updates work are given in Appendix C. One significant difference with previous approaches is that the policy in Dyna-MPC is learned on-policy in imagination, thus no correction for learning off-policy is required (Sikchi et al., 2020). Given the insights from the previous section, we use the world models and actors pre-trained with all the different unsupervised strategies we considered (see Section 3.1)2 and test their FT performance with and without planning with Dyna-MPC. Aggregated scores are reported in Figure 4, and detailed results for each method are available in Appendix E. We observe that adopting Dyna-MPC is always beneficial, as it improves the average performance and reduces variance in all domains. 3.4 OUR METHOD: COMBINING THE FINDINGS TOGETHER In the large-scale study, we explored several design choices to establish the most adequate approach to tackle the URL benchmark, aiming to provide a general recipe for data-efficient adaptation thanks to unsupervised RL. Our approach combines the main findings we presented in the previous sections: 1. learning a model-based agent with data collected using unsupervised RL (Figure 2); 2. fine-tuning the PT world model (always) and the pre-trained actor (where beneficial), while learning the critic from scratch (Figure 3); 3. adopting a hybrid planner, as the proposed Dyna-MPC, to leverage both learning and planning in imagination (Figure 4). An overview of the method is illustrated in Figure 1 and the algorithm is presented in Appendix D. We believe the above recipe could be generally applied to unsupervised settings, also outside of URLB, with the precaution that one should carefully make two decisions: (a) whether fine-tuning the PT actor is meaningful for the downstream task or it’s better to re-learn it from scratch, (b) what is the best URL strategy to collect data. Both decisions strongly depend on the target domain/task and so it is difficult to assess their implications beforehand. However, adopting unsupervised strategies that specifically focus on interacting with interesting elements of the environment, e.g. objects, or that quickly explore large areas of the environment at the beginning of fine-tuning may help exploring and revisiting crucial states of the environment more easily (Parisi et al., 2021). For URLB, we already established (a) that the PT actor is effective in Walker and Quadruped tasks, but it is better re-learn the actor from scratch in Jaco, in Section 3.2. To decide which URL strategy to use (b) we present a detailed comparison of the performance of our approach using different exploration strategies. The results in Figure 5 show that the agent using LBS during pre-training performs overall best, as it has the highest interquartile mean (IQM) and mean scores, and the lowest optimality gap. 
Thus, in the evaluation section, we present Ours (LBS) as our approach. 4 EVALUATION AND ANALYSIS 4.1 UNSUPERVISED REINFORCEMENT LEARNING BENCHMARK In Section 3, we presented our approach, which combines the findings from our empirical large-scale study on URLB. In Figure 6, we compare the results from the original URLB paper with our approach. The performance of our method is superior in all domains. The second strongest method (DrQ with Disagreement) approaches an overall performance of 40% of the respective supervised baseline performance, while our method recovers more than 90% of its supervised counterpart. 4.2 REAL-WORLD REINFORCEMENT LEARNING BENCHMARK Algorithms developed in simulation struggle to transfer to real-world systems due to a series of implicit assumptions that are rarely satisfied in real environments, e.g. URLB assumes the dynamics between PT and FT stay the same. The RWRL benchmark (Dulac-Arnold et al., 2020) considers several challenges that are common in real-world systems and implements them on top of DMC tasks. We employ vision-based variants of the Walker Walk and Quadruped Walk tasks from the RWRL benchmark. These tasks introduce system delays, stochasticity, and perturbations of the robot’s model and sensors, which are applied with three degrees of intensity to the original environment, i.e. ‘easy’, ‘medium’, and ‘hard’ (details in Appendix F). We seek to answer whether in perturbed settings: • does unsupervised PT enable faster adaptation? • does unsupervised RL provide an advantage over random exploration? • does hybrid planning improve performance, as in URLB? In Figure 7, we present the results of our method, using LBS during PT, with and without planning with Dyna-MPC for FT, and compare to random exploration and training from scratch for 100k, 1M, and 2M frames. Crucially, the PT models are trained in the vanilla task-agnostic version of the environments from the DMC Suite, so that the results highlight the extent to which models trained in ideal conditions generalize to perturbed settings when fine-tuned in a low-data regime. 2We exceptionally do not use the pre-trained actor in the Jaco tasks, as this was shown to lead to better performance in Section 3.2 (Figure 3). Overall, we found that fine-tuning PT models offer an advantage over training from scratch for 100k frames, despite all the variations in the environment. Furthermore, on the Quadruped Easy and Medium settings, our method performs better than Dreamer@1M and not far from Dreamer@2M while using 10x and 20x less task-specific data, respectively. Our method also performs close to Dreamer@1M/2M in the Walker Easy task. Unsupervised RL for data collection (Ours) outperforms random actions in the ‘easy’ and ‘medium’ settings, showing that a better PT model yields higher FT performance, even when the dynamics of the downstream task is affected by misspecifications and noisy factors. Finally, in contrast with the findings on URLB, adopting the hybrid planner is not generally beneficial. We believe this is because the model’s predictions are less certain and precise in this setting and thus cannot inform the short-term planner accurately. 4.3 EXTENDED ANALYSIS To better analyze the learned components, we conducted a range of additional experiments. For conciseness, detailed descriptions of the experimental settings are deferred to Appendix G and we briefly summarize the takeaways in this section. Learning rewards online. 
We verify whether having to discover and learn the reward function during FT impacts performance. In Figure 8, we compare against agents that (violating the URLB settings) know the task in advance and can pre-train a reward predictor during the PT stage. We see that learning the reward predictor does not affect performance significantly for dense-reward tasks, such as the Walker and Quadruped tasks. However, in sparser reward tasks, i.e. the Jaco ones, knowing reward information in advance provides an advantage. Efficient strategies to find sparse rewards efficiently represent a challenge for future research. More details in Appendix G.1. Zero-shot adaptation. Knowing a reward predictor from PT, it could be possible to perform zero-shot control with MPC methods if the model and the reward function allow it. In Figure 9, we show that despite the zero-shot MPC (ZS) offers an advantage over Dreamer@100k, the FT phase is crucial to deliver high performance on the downstream tasks, as the agent uses this phase to collect missing information about the environment and the task. Further details in Appendix G.2. Latent dynamics discrepancy (LDD). We propose a novel metric, Latent Dynamics Discrepancy, which evaluates the distance between the latent predictions of the PT model and the same model after FT on a task. In Figure 10, we show the correlation between our metric and the performance ratio between using the PT model and the FT model for planning (see Appendix G.3 for a detailed explanation). We observed a strong negative Pearson correlation (−0.62, p-value: 0.03), highlighting that major updates in the model dynamics during FT played an important role in improving performance. Unsupervised rewards and performance. We analyze the correlation between the normalized performance of different agents and their intrinsic rewards for optimal trajectories obtained by an oracle agent in Table 1. In particular, the correlation for LBS, which overall performs best in URLB, has a statistical significance, as its p-value is < 0.05. We believe this correlation might be one of the causes of LBS outstanding performance. Further insights are provided in Appendix G.4. 5 RELATED WORK Model-based control. Dynamics models combined with powerful search methods have led to impressive results on a wide variety of tasks such as Atari (Schrittwieser et al., 2020) and continuous control (Hafner et al., 2019a; Janner et al., 2019; Sikchi et al., 2021; Lowrey et al., 2018). LOOP (Sikchi et al., 2020) and TD-MPC (Hansen et al., 2022) combine temporal difference learning and MPC. The model proposed with TD-MPC is task-oriented and thus requires a task to accelerate learning. In our work, we focus on unsupervised model learning, grounding on the DreamerV2 model (Hafner et al., 2021), whose supervision comes from predicting the environment’s observations. Methods that use no reconstruction could generalize better to visual differences (Mazzaglia et al., 2021a; Ma et al., 2020) but they lose in explainability, as they cannot decode imagined trajectories. Unsupervised RL. Prior to our work, the large-scale study of curiosity (Burda et al., 2018) provided an insightful analysis of the performance of knowledge-based methods in the reward-free setting. In our work, we leverage the URLB setting, to provide an analysis of a combination of model-based control techniques with unsupervised RL. This allowed us to formulate a strategy to adapt pre-trained models to visual control tasks in a data-efficient manner. Closely, Sekar et al. 
(2020) adapts the Disagreement algorithm (Pathak et al., 2019) to work with Dreamer (Hafner et al., 2019a). In our work, in addition to analyzing a wider choice of unsupervised RL strategies, we show how to better exploit the agent's PT components for adaptation, and we propose a hybrid planner to improve data-efficiency. Transfer learning. In the field of transfer learning, fine-tuning is the most widely used approach. However, fine-tuning all the pre-trained agent components may not be the most effective strategy. In transfer learning for RL, this problem has been studied mainly with the objective of transferring from one environment to another (Farebrother et al., 2018; Sasso et al., 2022; van Driessel & Francois-Lavet, 2021). Instead, we analyze which of the agent's components should be transferred from the unsupervised PT stage to the supervised FT stage when the environment's dynamics is assumed to stay similar or identical. Another stream of work has studied successor representations, to enable better transfer of the agent's actor-critic (Hansen et al., 2020; Barreto et al., 2016). 6 CONCLUSION In order to accelerate the development and deployment of learning agents for real-world tasks, it is crucial that the employed algorithms can adapt in a data-efficient way to multiple tasks. Our study provides an empirical analysis of several design choices, which allowed us to obtain near-optimal performance on URLB and showed robustness to environment perturbations on the RWRL benchmark. We also analyzed several aspects of the learned models, to understand what could be improved in the future to ease the adaptation process. Limitations. In the Jaco reaching tasks, we found that a bad initialization of the pre-trained actor can actually harm the agent's performance. While competence-based approaches should address this limitation, by learning a variety of skill behaviors, their performance on the other domains has been subpar. Future work should aim to find a more general approach to pre-train behavior for fast adaptation or to improve the exploration capabilities of competence-based approaches. Another issue we encountered on the RWRL benchmark is that, if the environment introduces overly intense perturbations during adaptation, relying on the predictions of the adopted world model becomes problematic, to the extent that exploiting a planner is no longer useful. Developing more resilient models that can be trained in an unsupervised fashion and used for data-efficient planning, even in the presence of complex perturbations, will be the focus of future studies. Reproducibility statement We reported in the main text (Algorithm 1) the pseudo-code for Dyna-MPC and in Appendix D the pseudo-code for our end-to-end approach. We also provide instructions on how we implemented our methods (Appendix B) and all the model and training hyperparameters needed to implement and reproduce the results (Table 4). We will release our code and scripts. A NORMALIZATION SCORES In Table 2, we report the mean scores for the URLB Expert, used to normalize the scores in the URLB paper, and for Dreamer@2M, which we use to normalize the returns of our methods; both supervised baselines have been trained individually on each of the 12 tasks from URLB for 2M frames. We additionally report means and standard deviations for the best-performing unsupervised baseline from URLB, which is Disagreement (Pathak et al., 2019), and for our method (using LBS for data collection).
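To make the normalization used throughout the plots concrete, the sketch below divides per-task mean returns by the Dreamer@2M reference scores and averages the result; the task names and score values are hypothetical placeholders, not the numbers reported in Table 2.

```python
import numpy as np

# Hypothetical per-task mean returns for a fine-tuned agent (several seeds each)
# and for the Dreamer@2M supervised reference used for normalization.
agent_returns = {"walker_flip": [720.0, 690.0, 705.0],
                 "quadruped_jump": [810.0, 795.0, 830.0]}
dreamer_at_2m = {"walker_flip": 760.0, "quadruped_jump": 820.0}

def normalized_scores(agent, reference):
    """Return per-task scores as a fraction of the supervised reference score."""
    return {task: np.mean(vals) / reference[task] for task, vals in agent.items()}

scores = normalized_scores(agent_returns, dreamer_at_2m)
print(scores)                                  # e.g. {'walker_flip': 0.93, ...}
print("mean normalized score:", np.mean(list(scores.values())))
```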
We notice that our scores approach the Dreamer@2M’s scores in several tasks, eventually outperforming them in a few tasks (e.g. Walker Flip, Quadruped Jump). We believe this merit is due both to the exploration pre-training, which may have found more rewarding trajectories than greedy supervised RL optimization and of the improved Dyna-MPC planning strategy. B INTEGRATING UNSUPERVISED RL STRATEGIES We summarize here the unsupervised RL approaches tested and how we integrated them with the Dreamer algorithm for exploration. For all methods, rewards have been normalized during training using an exponential moving average with momentum 0.95, with the exceptions of RND, which follows its original reward normalization (Burda et al., 2019b), and APS, whose rewards are not normalized because they are used to regress the skill that is closer to the downstream task during FT. ICM. The Intrinsic Curiosity Module (ICM; Pathak et al. (2017)) defines intrinsic rewards as the error between states projected in a feature space and a feature dynamics model’s predictions. We use the Dreamer agent encoder et = fϕ(st) to obtain features and train a forward dynamics model g(et|et−1, at−1) to compute rewards as: rt ICM ∝ ∥g(et|et−1, at−1)− et∥2. As the rewards for ICM require environment states (going through the encoder to compute prediction error), we train a reward predictor to allow estimating rewards in imagination. Plan2Explore. The Plan2Explore algorithm (Sekar et al., 2020) is an adaptation of the Disagreement algorithm (Pathak et al., 2019) for latent dynamics models. An ensemble of forward dynamics models is trained to predict the features embedding et = fϕ(st), given the previous latent state and actions, i.e. g(et|zt−1, at−1, wk), where wk are the parameters of the k-th predictor. Intrinsic rewards are defined as the variance of the ensemble predictions: rt P2E ∝ Var({g(et|zt−1, at−1, wk)|k ∈ [1, ...,K]}). Plan2Explore requires only latent states and actions, thus it can be computed directly in imagination. We used an ensemble of 5 models. RND. Random Network Distillation (RND; Burda et al. (2019b)) learns to predict the output of a randomly initialized network n(st) that projects the states into a more compact random feature space. As the random network is not updated during training, the prediction error should diminish for already visited states. The intrinsic reward here is defined as: rt RND ∝ ∥g(st)− n(st)∥2 As the rewards for RND requires environment states (to encode with the random network), we train a reward predictor to allow estimating rewards in imagination. LBS. In Latent Bayesian Surprise (LBS; Mazzaglia et al. (2021b)), they use the KL divergence between the posterior and the prior of a latent dynamics model as a proxy for the information gained with respect to the latent state variable, by observing new states. Rewards are computed as: rt LBS ∝ DKL[q(zt|zt−1, at−1, et)∥p(zt|zt−1, at−1)] As the rewards for LBS requires environment states (to compute the posterior distribution), we train a reward predictor to allow estimating rewards in imagination. APT. Active Pre-training (APT; Liu & Abbeel (2021b)) uses a particle-based estimator based on the K nearest-neighbors algorithm (Singh et al., 2003) to estimate entropy for a given state. We implement APT on top of the deterministic component of the latent states z̄t, providing rewards as: rt APT ∝ k∑ i log ∥z̄t − z̄it∥2, where k are the nearest-neighbor states in latent space. 
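The following is a minimal sketch of the particle-based kNN reward described above, computed over a batch of deterministic latent states; treating the batch as the particle set, the `apt_reward` name, and the tensor shapes are assumptions made for illustration rather than the authors' implementation.

```python
import torch

def apt_reward(z, k=12, eps=1e-6):
    """Particle-based entropy proxy: sum of log-distances from each latent state
    to its k nearest neighbors within the batch (sketch of the APT reward above)."""
    # z: [B, D] deterministic latent states
    dists = torch.cdist(z, z)                            # pairwise distances [B, B]
    dists = dists + torch.eye(len(z)) * 1e9              # exclude self-distances
    knn, _ = torch.topk(dists, k, dim=1, largest=False)  # k smallest distances per row
    return torch.log(knn + eps).sum(dim=1)               # [B] intrinsic reward per state

rewards = apt_reward(torch.randn(64, 230), k=12)
print(rewards.shape)  # torch.Size([64])
```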
As APT requires only latent states, it can be computed directly in imagination. We used k = 12 nearest neighbors. DIAYN. Diversity is All you need (DIAYN; Eysenbach et al. (2019)) maximizes the mutual information between the states and latent skills w. We implement DIAYN on top of the latent space of Dreamer, writing the mutual information as I(wt, zt) = H(wt)−H(wt|zt). The entropy H(wt) is kept maximal by sampling wt ∼ Unif(wt) from a discrete uniform prior distribution, while H(wt|zt) is estimated learning a discriminator q(wt|zt). We compute intrinsic rewards as: rt DIAYN ∝ log q(wt|zt) Additionally, DIAYN maximizes the entropy of the actor, so we add an entropy maximization term to Dreamer’s objective (Haarnoja et al., 2018). As DIAYN requires model states and skills sampled from a uniform distribution to compute rewards, we can directly compute them in imagination. For FT, the skill adapted is the one with the highest expected rewards, considering the states and rewards obtained in the initial episodes. APS. Active Pre-training with Successor features (APS; Liu & Abbeel (2021a)) maximizes the mutual information between the states and latent skills w. We implement APS on top of the latent space of Dreamer, writing the mutual information as I(wt, zt) = H(zt)−H(zt|wt). The entropy term H(zt) is estimated using a particle-based estimator on top of the deterministic component of the latent states z̄t, as for APT, while the term H(zt|wt) is estimated learning a discriminator q(zt|wt). The intrinsic rewards for APS can be written as: rt APS ∝ rtAPT + log q(wt|zt) As APS requires model states and uniformly sampled skills to compute rewards, we can directly compute them in imagination. For FT, the skill to adapt is selected using linear regression over the states and rewards obtained in the initial episodes (Liu & Abbeel, 2021a). C DYNA-MPC To further improve data efficiency, we chose to use an hybrid planner that combines reinforcement learning and MPC (Hansen et al., 2022; Sikchi et al., 2020; Lowrey et al., 2018). Previous works leveraged model-free off-policy algorithms (Hansen et al., 2022; Sikchi et al., 2020) to learn the actor and critic in a more computationally efficient manner. The policy used to act on the environment combines action samples from the actor network with MPC, while the critic and the actor are learned "offline" from previously collected data. This has several benefits but also leads to an issue referred to as “actor divergence" (Sikchi et al., 2020), which consists of the policy used for data collection being different from the policy that is used to learn the critic. In our study, we found that using the PT world model to learn the actor and the critic is crucial to improve data-efficiency during FT (see Figure 3). Thus, we discard the option of learning the actor and critic with off-policy deep RL. Instead, we design a new hybrid planner, which we call Dyna-MPC, that learns actor and critic functions in the model imagination (Sutton, 1991), using the Dreamer algorithm (Hafner et al., 2019a), and then combines their predictions with MPPI (Williams et al., 2015) for acting on the environment. By doing so we mitigate the "actor divergence" issue as actor and critic are learned on-policy on the trajectories generated with the model. 
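Before the return estimates and MPPI updates are detailed below, here is a minimal sketch of one Dyna-MPC action-selection step; the toy `dynamics`, `reward`, `value`, and `actor` callables and all numeric defaults are placeholders standing in for the learned world model, reward predictor, critic, and actor, so this illustrates the scheme rather than reproducing the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder latent-space components (assumptions standing in for the learned
# world model, reward predictor, critic, and actor of the pre-trained agent).
def dynamics(z, a):    return 0.9 * z + 0.1 * a.mean()        # toy latent transition
def reward(z, a):      return -float(np.abs(z).mean())        # toy reward predictor
def value(z):          return -float(np.abs(z).mean()) * 5.0  # toy critic bootstrap
def actor(z, act_dim): return np.tanh(rng.normal(size=act_dim))  # toy policy sample

def dyna_mpc_step(z0, act_dim, H=15, N=1000, N_pi=100, J=5, top_k=100,
                  temperature=0.5, sigma_min=0.05):
    """One Dyna-MPC action selection: sample trajectories from a Gaussian and
    from the actor, score them with model rollouts plus a value bootstrap, and
    refine the Gaussian with importance-weighted (MPPI-style) updates."""
    mu = np.zeros((H, act_dim))
    sigma = np.ones((H, act_dim))
    for _ in range(J):
        # Candidate action sequences: Gaussian samples plus actor proposals.
        cand = rng.normal(mu, sigma, size=(N, H, act_dim))
        pi_cand = np.empty((N_pi, H, act_dim))
        for i in range(N_pi):
            z = z0
            for t in range(H):
                pi_cand[i, t] = actor(z, act_dim)
                z = dynamics(z, pi_cand[i, t])
        cand = np.concatenate([cand, pi_cand], axis=0)
        # Score every candidate with imagined rewards and a terminal value.
        returns = np.zeros(len(cand))
        for i, seq in enumerate(cand):
            z, ret = z0, 0.0
            for t in range(H):
                ret += 0.99 ** t * reward(z, seq[t])
                z = dynamics(z, seq[t])
            returns[i] = ret + 0.99 ** H * value(z)
        # MPPI-style update on the top-k trajectories.
        elite = np.argsort(returns)[-top_k:]
        w = np.exp(temperature * (returns[elite] - returns[elite].max()))
        w /= w.sum()
        mu = (w[:, None, None] * cand[elite]).sum(axis=0)
        var = (w[:, None, None] * (cand[elite] - mu) ** 2).sum(axis=0)
        sigma = np.maximum(np.sqrt(var), sigma_min)
    return rng.normal(mu[0], sigma[0])  # execute the first planned action

action = dyna_mpc_step(z0=np.ones(16), act_dim=6)
print(action.shape)  # (6,)
```

For readability, the sketch bootstraps only with a terminal value instead of the GAE-λ estimate of Eq. (1) below, and normalizes the weights over the elite set only.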
The critic is learned in the model's imagination, computing the expected value of the actor's actions using GAE-λ estimates of the returns (Schulman et al., 2016; Hafner et al., 2019a):

V_t^\lambda = r_t + \gamma_t \begin{cases} (1-\lambda)\, v_\psi(z_{t+1}) + \lambda V_{t+1}^\lambda & \text{if } t < H, \\ v_\psi(z_H) & \text{if } t = H, \end{cases} \qquad (1)

where r_t is the reward for state z_t, yielded by the reward predictor of the world model, and H is the imagination horizon. When computing returns for MPPI we use the same return estimates. At each time step, we use MPPI to select the best action. MPPI iteratively fits the parameters of a time-dependent multivariate Gaussian distribution with diagonal covariance, updating the mean and standard deviation parameters using an importance-weighted average of the top-k trajectories with the highest estimated returns. At every step, N trajectories \Gamma_i = \{a_{0,i}, a_{1,i}, \dots, a_{H,i}\} of length H are obtained by sampling actions from the distributions a_t \sim \mathcal{N}(\mu_t, \sigma_t^2 I), and N_\pi trajectories are sampled from the actor network a_t \sim \pi_\theta(a_t|z_t); their outcomes are predicted using the model. At each MPPI iteration, the distribution parameters are updated as follows:

\mu = \frac{\sum_{i=1}^{k} \Omega_i \Gamma_i^\star}{\sum_{i=1}^{N} \Omega_i}, \qquad \sigma = \max\left(\sqrt{\frac{\sum_{i=1}^{N} \Omega_i \left(\Gamma_i^\star - \mu\right)^2}{\sum_{i=1}^{N} \Omega_i}},\ \epsilon\right), \qquad (2)

where \Omega_i = \exp(\tau V_i^\lambda), \tau is a temperature parameter, \star indicates that the trajectory is in the top-k, and \epsilon is a clipping factor to avoid too small standard deviations (Hansen et al., 2022). To reduce the number of iterations required for convergence, we reuse the 1-step shifted mean obtained at the previous timestep (Argenson & Dulac-Arnold, 2020).

D ALGORITHM

Algorithm 2 Unsupervised Model-based Pre-Training for Data-efficient Control from Pixels
Require: Actor θ, Critic ψ, World Model ϕ
1: Intrinsic reward r_int, extrinsic reward r_ext
2: Environment, M downstream tasks T_k, k ∈ [1, . . . , M]
3: Pre-train frames N_PT, fine-tune frames N_FT, environment frames/update τ
4: Initial model state z_0, hybrid planner Dyna-MPC, replay buffers D_PT, D_FT
5:
6: // Pre-training
7: for t = 0, . . . , N_PT do
8:   Draw action from the actor, a_t ∼ π_θ(a_t|z_t)
9:   Apply action to the environment, s_{t+1} ∼ P(·|s_t, a_t)
10:  Add transition to replay buffer, D_PT ← D_PT ∪ (s_t, a_t, s_{t+1})
11:  Infer model state, z_{t+1} ∼ q(z_{t+1}|z_t, a_t, f_ϕ(s_{t+1}))
12:  if t mod τ = 0 then
13:    Update world model parameters ϕ on the data from the replay buffer D_PT
14:    Update actor-critic parameters {θ, ψ} in imagination, maximizing r_int
15:  end if
16: end for
17: Output pre-trained parameters {ψ_PT, θ_PT, ϕ_PT}
18:
19: // Fine-tuning
20: for T_k ∈ [T_1, . . . , T_M] do
21:   Initialize fine-tuning world model with ϕ_PT
22:   (Optional) Initialize fine-tuning actor with θ_PT
23:   for t = 0, . . . , N_FT do
24:     Draw action from the actor, a_t ∼ π_θ(a_t|z_t)
25:     Use the planner for selecting the best action, a_t ∼ Dyna-MPC(z_t)
26:     Apply action to the environment, s_{t+1}, r_t^ext ∼ P(·|s_t, a_t)
27:     Add transition to replay buffer, D_FT ← D_FT ∪ (s_t, a_t, r_t^ext, s_{t+1})
28:     Infer model state, z_{t+1} ∼ q(z_{t+1}|z_t, a_t, f_ϕ(s_{t+1}))
29:     if t mod τ = 0 then
30:       Update world model parameters ϕ on the data from the replay buffer D_FT
31:       Update actor-critic parameters {θ, ψ} in imagination, maximizing r_ext
32:     end if
33:   end for
34:   Evaluate performance on T_k
35: end for

E ADDITIONAL RESULTS We present complete results, for each unsupervised RL method, for the large-scale study experiments presented in Section 3. Can a pre-training stage longer than 2M frames be beneficial? In Figure 14, we report FT results with our full method, every 1M frames up to 5M PT frames.
The aggregated results show that, with our method, longer PT can further increase performance, especially up to 4M steps. The performance in all domains keeps increasing or remains steady until 5M steps, with two exceptions, Walker for Plan2Explore and Jaco for APS, where performance drops between 4M and 5M steps. For these experiments, we kept the model size and all hyperparameters unchanged with respect to the 2M-PT-frames experiments, but increased the maximum replay buffer size to 5M frames. By increasing the model capacity and adopting additional precautions, such as annealing the learning rate, the agent could possibly benefit even more from longer pre-training; we aim to analyse this in more detail in future work. F RWRL SETTINGS We take the Quadruped and Walker tasks from the RWRL benchmark and replace the low-dimensional sensor inputs with RGB camera inputs. While this removes some of the perturbations planned in the benchmark (Dulac-Arnold et al., 2020), such as noise in the sensors, it introduces the difficulty that the dynamics in pixel space (due to the other perturbations) differ from those observed during pre-training in the vanilla simulation environment. G EXTENDED ANALYSIS We note that, to run the experiments faster, we did not use Dyna-MPC for the extended analysis. Furthermore, the Jaco tasks used here differ slightly from the original URLB ones, only in that the target to reach cannot move. This keeps the reward function consistent between PT and FT, so that a reward predictor can be trained on ‘reward-labelled’ PT data. However, because of this change, the performance in Jaco may differ from the other main results (particularly in Figure 8 and Figure 9). G.1 LEARNING REWARDS ONLINE In Figure 8 of the main text, we measure the gap in performance between pre-trained agents that have no knowledge of the reward function at the beginning of fine-tuning and agents whose reward predictor is initialized from one learned on top of the unsupervised pre-training data (violating the URLB settings). Crucially, during unsupervised PT the agent can learn the reward predictor without affecting either the model learning or the exploration process. To avoid affecting the model, gradients are stopped between the reward predictor and the rest of the world model. To avoid affecting exploration, the rewards used to train the agent's actor and critic remain the intrinsic exploration rewards. G.2 ZERO-SHOT ADAPTATION Using agents that have access to a PT reward predictor, we explore the idea of zero-shot adaptation with MPC, which attempts to solve the URLB tasks using only planning with the pre-trained world model and reward predictor. To obtain good performance, this assumes that the model correctly learned the dynamics of the environment and, during pre-training, explored rewarding transitions that are relevant to the downstream task. In Figure 9 of the main text, we compare the results of performing MPC in a zero-shot setting (ZS) with the performance of an MPC agent that is allowed 100k frames for fine-tuning (FT). As the MPC method, we employ MPPI (Williams et al., 2015). Because these experiments are particularly expensive to run, we ran them only on the agents trained with the Plan2Explore URL approach. We observe that the performance of zero-shot MPC is generally weak.
While it overall performs better than the non-pre-trained model, simply applying MPC leveraging the pre-trained world model and reward predictor trained on the pre-training stage data is not sufficient to guarantee satisfactory performance. The fact that exploiting the fine-tuning stage using the same MPC approach generally boosts performance demonstrates that the model has a major benefit from the FT stage. Still, the performance of MPC generally lacks behind the actor-critic performance, suggesting that, especially in a higher-dimensional action space such as the Quadruped one, amortizing the cost of planning with actor-critic seems crucial to achieve higher performance. G.3 LATENT DYNAMICS DISCREPANCY Model misspecification is a useful measure to assess the uncertainty or inaccuracy of the model dynamics. It is computed as the difference between the dynamics predictions and the real environment dynamics. The metric helps build robust RL strategies, that take the dynamics uncertainty into account while searching for the optimal behavior (Talvitie, 2018). However, with pixel-based inputs the dynamics of the environment are observed through high-dimensional images. And this in-turn could hurt the metric evaluation, since the distances in pixel space can be misleading. In our approach, we use a model-based RL agent that learns the dynamics model in a compact latent space Z . Our novel metric, Latent Dynamics Discrepancy (LDD), quantifies the “misspecification" of the learned latent dynamics accordingly. The metric quantifies the distance between the predictions of the pre-trained model and the same model after fine-tuning on a downstream task. However, as the decoder of the world model gets updated during fine-tuning, the latent space mapping between model states z and environment states s might drift. For this reason, we freeze the agent’s decoder weights, so that the model can only improve the posterior and the dynamics. This ensures that the mapping Z −→ S remains unchanged and allows to compare the dynamics model after fine-tuning with the one before fine-tuning. In order to measure the distance between the distribution output by the dynamics network, we chose the symmetrical Jensen-Shannon divergence: LDD = E(zt,at) [ DJS[pFT(zt+1|zt, at)∥pPT(zt+1|zt, at)] ] , (3) where the expectation is taken over the previous model states zt sampled from the fine-tuned posterior qFT(zt), actions at−1 sampled from an oracle actor π∗(at|zt), so that we evaluate the metric on optimal trajectories, whose environment’s state distribution corresponds to the stationary distribution induced by the actor st ∼ dπ ∗ (st). We used 30 trajectories per task in our evaluation. We observe in our experiments that there exists a correlation between the metric and the performance ratio between a zero-shot model and a fine-tuned model (see Figure 10 in the main paper). The key observation is that major updates in the model dynamics during fine-tuning phase played an important role in improving the agent’s performance, compared to the pre-trained model and zero-shot performance. Future research may attempt to reduce such dependency by either improving the model learning process, so that the pre-trained dynamics could have greater accuracy, or the data collection process, proposing URL methods that directly aid to reduce such uncertainty. 
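For clarity, the snippet below sketches how the Jensen-Shannon divergence in Eq. (3) could be estimated by Monte Carlo for two latent dynamics predictions; the diagonal-Gaussian parameterization and the example tensors are simplifying assumptions for illustration (the actual agent uses discrete latent states), so this is not the authors' evaluation code.

```python
import torch
from torch.distributions import Normal

def js_divergence(p, q, n_samples=256):
    """Monte Carlo Jensen-Shannon divergence between two factorized distributions:
    JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m the equal mixture."""
    def kl_to_mixture(d, other):
        x = d.sample((n_samples,))
        lp_d = d.log_prob(x).sum(-1)          # joint log-prob under d
        lp_o = other.log_prob(x).sum(-1)      # joint log-prob under the other model
        log_m = torch.logsumexp(torch.stack([lp_d, lp_o]), dim=0) - torch.log(torch.tensor(2.0))
        return (lp_d - log_m).mean()
    return 0.5 * kl_to_mixture(p, q) + 0.5 * kl_to_mixture(q, p)

# Hypothetical diagonal-Gaussian predictions of the PT and FT dynamics for the
# same (z_t, a_t); in practice this would be averaged over oracle trajectories.
pt_pred = Normal(torch.zeros(32), torch.ones(32))
ft_pred = Normal(0.2 * torch.ones(32), 0.9 * torch.ones(32))
print(float(js_divergence(ft_pred, pt_pred)))
```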
G.4 UNSUPERVISED REWARDS AND PERFORMANCE We further analyzed the correlation between the normalized performance of the different exploration agents and their intrinsic rewards for optimal trajectories obtained by an oracle agent. A strong negative correlation between the two factors should indicate that the agent is more interested in seeing the optimal trajectories when its performance is low on the task. We observe that there is negative correlation between Plan2Explore (P2E), ICM, LBS’s performance and their intrinsic rewards, while we found∼0 correlation for RND (see Table 1 in the main text). Out of the methods tested, LBS significantly demonstrated the correlation, as its p-value is < 0.05. This is likely one of the key factors for the high performance of the agent using LBS on the benchmark. One possible explanation is that LBS searches for transitions of the environment that are difficult to predict for the dynamics, so the model likely learns those transitions more accurately, facilitating planning during the fine-tuning stage. Another potential explanation is that, given the high correlation between intrinsic and extrinsic rewards, the actor initialized by LBS performs better at the beginning of FT, speeding up adaptation. H HYPERPARAMETERS Most of the hyperparameters we used for world-model training are the same as in the original DreamerV2 work (Hafner et al., 2021). Specific details are as outlined here: For the pure MPC-based experiments, we increased the number of MPPI samples from 512 to 1000, the number of top-k from 64 to 100, and the horizon from 5 to 15, to compensate for the absence of the actor network’s samples and the critic’s predictions in the return estimates.
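For reference, the planner settings just mentioned can be gathered into small configuration dictionaries; the key names below are illustrative, and only the numeric values are taken from the text.

```python
# Hypothetical config dicts collecting the planner settings stated above.
hybrid_dyna_mpc = dict(
    mppi_samples=512,   # trajectories sampled from the Gaussian per MPPI iteration
    top_k=64,           # elite trajectories used for the MPPI update
    horizon=5,          # short horizon: the actor/critic supply long-term value
)
pure_mpc = dict(
    mppi_samples=1000,  # more samples to compensate for the missing actor proposals
    top_k=100,
    horizon=15,         # longer horizon to compensate for the missing critic bootstrap
)
```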
1. What is the focus of the paper regarding unsupervised reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to prior works like Plan2Explore?
3. Do you have any concerns or recommendations regarding the formulation of the main hypothesis and its relation to previous research?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific questions or aspects that the reviewer would like to know more about or see clarified in the paper?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors study model-based methods for tackling the unsupervised reinforcement learning benchmark (URLB). First, the authors show that substituting DrQ (a model-free agent) with DreamerV2 results in increased performance and scaling on URLB. Second, the authors investigate the transfer of specific components (model, actor, critic) from pre-training to fine-tuning, finding that transferring model and actor is beneficial but additionally transferring critic compromises performance. Third, the authors propose Dyna-MPC, a planning procedure that builds upon model predictive path integral (MPPI) control, using the actor as an additional source of sampled trajectories, and the critic and model to score trajectories. This is shown to further improve fine-tuning. These improvements altogether result in dramatic improvement on URLB. Finally, the authors assess transfer under distribution shifts via the real-world reinforcement learning benchmark (RWRL), and find that their proposed model without Dyna-MPC performs best. Strengths And Weaknesses Strengths The results are particularly strong. The progression of experiments building towards the proposed method is well structured and informative. The set of methods, comparisons, and ablations is extensive. Weaknesses The gains provided by model-based learning are probably significantly inflated due to the simplicity of the three URLB environments, which enables in-imagination training and planning to actually work. In contrast, in Atari, which has far more visual variation per game despite arguably simpler dynamics, model-based methods have still yet to come close to model-free methods, except when the allowed amount of environment interaction or compute is limited. Thus, while impressive, the improvements coming from this work should be taken with a grain of salt, and might not hold for environments with more realistic amounts of variation and complexity. The application of model-based learning for unsupervised pre-training for RL has already been known to be effective (e.g. Plan2Explore). I'm not sure why the URLB authors did not benchmark model-based baselines like Plan2Explore, but regardless, the improvements over the DrQ-based runs is expected given Plan2Explore, especially given the environments in Plan2Explore and URLB are both sourced from the DeepMind Control Suite. Regrettably, it seems that the authors formulated their main hypothesis without realizing the above. I recommend rephrasing the contributions in the abstract and intro to convey that the results in this work corroborate and extend upon the findings first provided by Plan2Explore. Clarity, Quality, Novelty And Reproducibility Clarity The writing was very clear and well-structured. Quality The execution of the work appears sound. The empirical results are impressive. Novelty As the authors admit, the work does not propose a novel method, but rather demonstrates the outsized benefit of applying existing methods to a particular problem setting. As noted above, this benefit is itself not particularly novel, as the point was already previously made in the Plan2Explore work. However, this work does provide quite a few minor novelties over Plan2Explore, including swapping out Dreamer for DreamerV2, assessing using a benchmark not available to Plan2Explore, assessing using a variety of self-supervised reward mechanisms during pre-training, investigating the transfer of various components, the Dyna-MPC planning mechanism, and robustness results on RWRL. 
Reproducibility I'm satisfied with the level of detail provided for reproducibility purposes.
ICLR
Title Unsupervised Model-based Pre-training for Data-efficient Control from Pixels Abstract Controlling artificial agents from visual sensory data is an arduous task. Reinforcement learning (RL) algorithms can succeed in this but require large amounts of interactions between the agent and the environment. To alleviate the issue, unsupervised RL proposes to employ self-supervised interaction and learning, for adapting faster to future tasks. Yet, whether current unsupervised strategies improve generalization capabilities is still unclear, especially in visual control settings. In this work, we design an unsupervised RL strategy for data-efficient visual control. First, we show that world models pre-trained with data collected using unsupervised RL can facilitate adaptation for future tasks. Then, we analyze several design choices to adapt faster, effectively reusing the agents’ pre-trained components, and planning in imagination, with our hybrid planner, which we dub Dyna-MPC. By combining the findings of a large-scale empirical study, we establish an approach that strongly improves performance on the Unsupervised RL Benchmark, requiring 20× less data to match the performance of supervised methods. The approach also demonstrates robust performance on the Real-Word RL benchmark, hinting that the approach generalizes to noisy environments. 1 INTRODUCTION Modern successes of deep reinforcement learning (RL) have shown promising results for control problems (Levine et al., 2016; OpenAI et al., 2019; Lu et al., 2021). However, training an agent for each task individually requires a large amount of task-specific environment interactions, incurring huge redundancy and prolonged human supervision. Developing algorithms that can efficiently adapt and generalize to new tasks has hence become an active area of research in the RL community. In computer vision and natural language processing, unsupervised learning has enabled training models without supervision to reduce sample complexity on downstream tasks (Chen et al., 2020; Radford et al., 2019). In a similar fashion, unsupervised RL (URL) agents aim to learn about the environment without the need for external reward functions, driven by intrinsic motivation (Pathak et al., 2017; Burda et al., 2019a; Bellemare et al., 2016). Any learned models can then be adapted to downstream tasks, aiming to reduce the required amount of interactions with the environment. Recently, the Unsupervised RL Benchmark (URLB) (Laskin et al., 2021) established a common protocol to compare self-supervised algorithms across several domains and tasks from the DMC Suite (Tassa et al., 2018). In the benchmark, an agent is allowed a task-agnostic pre-training stage, where it can interact with the environment in an unsupervised manner, followed by a fine-tuning stage where, given a limited budget of interactions with the environment, the agent should quickly adapt for a specific task. However, the results obtained by Laskin et al. (2021) suggest that current URL approaches may be insufficient to perform well on the benchmark, especially when the inputs of the agent are pixel-based images. World models have proven highly effective for solving RL tasks from vision both in simulation (Hafner et al., 2021; 2019a) and in robotics (Wu et al., 2022), and they are generally data-efficient as they enable learning behavior in imagination (Sutton, 1991). 
Inspired by previous work on exploration (Sekar et al., 2020), we hypothesize this feature could be key in the unsupervised RL setting, as a pre-trained world model can leverage previous experience to learn behavior for new tasks in imagination, and in our work, we study how to best exploit this feature. We adopt the URLB setup to perform a large-scale study, involving several unsupervised RL methods for pre-training model-based agents, different fine-tuning strategies, and a new improved algorithm for efficiently planning with world models. The resulting approach, which combines the findings of our study, strongly improves performance on the URL benchmark from pixels, nearly achieving the asymptotic performance of supervised RL agents, trained with 20x more task-specific data, and bridging the gap with low-dimensional state inputs (Laskin et al., 2021). Contributions. This work does not propose a novel complex method. Rather, we study the interplay of various existing components and propose a novel final solution that outperforms existing state of the art on URLB by a staggering margin. Specifically: • we demonstrate that unsupervised RL combined with world models can be an effective pre-training strategy to enable data-efficient visual control (Section 3.1), • we study the interplays between the agent’s pre-trained components that improve sample efficiency during fine-tuning (Section 3.2), • we propose a novel hybrid planner we call Dyna-MPC, which allows us to effectively combine behaviors learned in imagination with planning (Section 3.3), • combining our findings into one approach, we outperform previous approaches on URLB from pixels, nearly solving the benchmark (Section 4.1), • we show the approach is resilient to environment perturbations, evaluating it on the Real World RL benchmark (Dulac-Arnold et al., 2020) (Section 4.2), • we present an extensive analysis of the pre-trained agents, aimed at understanding in-depth the current findings and limitations (Section 4.3). An extensive empirical evaluation, supported by more than 2k experiments, among main results, analysis and ablations, was used to carefully design our method. We hope that our large-scale evaluation will inform future research towards developing and deploying pre-trained agents that can be adapted with considerably less data to more complex/realistic tasks, as it has happened with unsupervised pre-trained models for vision (Parisi et al., 2022) and language (Ahn et al., 2022). 1 2 PRELIMINARIES Reinforcement learning. The RL setting can be formalized as a Markov Decision Process (MDP), denoted with the tuple {S,A, T,R, γ}, where S is the set of states, A is the set of actions, T is the state transition dynamics, R is the reward function, and γ is a discount factor. The objective of an RL agent is to maximize the expected discounted sum of rewards over time for a given task, also called return, and indicated as Gt = ∑T k=t+1 γ (k−t−1)rk. In continuous-action settings, you can learn an actor, i.e. a model predicting the action to take from a certain state, and a critic, i.e. a model that estimates the expected value of the actor’s actions over time. Actor-critic algorithms can be combined with the expressiveness of neural network models to solve complex continuous control tasks (Haarnoja et al., 2018; Lillicrap et al., 2016; Schulman et al., 2017). 1The PyTorch code for the experiments will be open-sourced upon publication. Unsupervised RL. 
In this work, we investigate the problem of fast adaptation for a downstream task, after a phase of unsupervised training and interaction with the environment. Our training routine, based on the setup of URLB (Laskin et al., 2021), is made of two phases: a pre-training (PT) phase, where the agent can interact with a task-agnostic version of the environment for up to 2M frames, and a fine-tuning phase (FT), where the agent is given a task to solve and a limited budget of 100k frames. During the PT phase, rewards are removed so that sensible information about the environment should be obtained by exploring the domain-dependent dynamics, which is expected to remain similar or unchanged in the downstream tasks. During FT, the agent receives task-specific rewards when interacting with the environment. As the agent has no prior knowledge of the task, it should both understand the task and solve it efficiently, in a limited interaction budget. In this setting, the performance of unsupervised model-free RL (Yarats et al., 2022) were shown to be insufficient as reported in (Laskin et al., 2021). We believe the key reason for this is that model-free RL algorithms can exploit only a little part of the information obtained with self-supervised interaction, as they rely uniquely on actor and critic’s predictions. World models. In this work, we ground upon the DreamerV2 agent (Hafner et al., 2021), which learns a world model (Ha & Schmidhuber, 2018; Hafner et al., 2019b) predicting the outcomes of actions in the environment. The dynamics is captured into a latent space Z , providing a compact representation of the high-dimensional inputs. The world model consists of the following components: Encoder: et = fϕ(st), Decoder: pϕ(st|zt), Dynamics: pϕ(zt|zt−1, at−1), Posterior: qϕ(zt|zt−1, at−1, et). The model states zt have both a deterministic component, modeled using the recurrent state of a GRU (Chung et al., 2014), and a (discrete) stochastic component. The encoder and decoder are convolutional neural networks (CNNs) and the remaining components are multi-layer perceptrons (MLPs). The world model is trained end-to-end by optimizing an evidence lower bound (ELBO) on the log-likelihood of the data collected in the environment (Hafner et al., 2019b;a). For the encoder and the decoder networks, we used the same architecture as in Hafner et al. (2021). For control, the agent learns latent actor πθ(at|zt) and critic vψ(zt) networks. Both components are trained online within the world model, by imagining the model state outcomes of the actions produced by the actor, using the model dynamics. Rewards for imagined trajectories are provided by a reward predictor, pϕ(rt|zt) trained to predict environment rewards, and they are combined with the critic predictions to produce a GAE-λ estimate of the returns (Schulman et al., 2016). The actor maximizes estimates of returns, backpropagating gradients through the model dynamics. The hyperparameters for the agent, which we keep fixed across all domains/tasks, can be found in Appendix H. 3 UNSUPERVISED MODEL-BASED PRE-TRAINING FOR DATA-EFFICIENT CONTROL FROM PIXELS To best exploit self-supervised pre-training for data-efficient adaptation, it is important that the agent: (i) meaningfully interacts with the environment during the PT phase, to discover useful transitions; (ii) successfully reuses the modules learned during PT for fast adaptation; and (iii) efficiently employs the FT phase to quickly understand and master the downstream task. 
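Since reusing the pre-trained world model is central to points (ii) and (iii), the sketch below illustrates the interfaces of the components summarized in Section 2 (encoder, prior dynamics, posterior, decoder, reward head); the class name, layer sizes, and the single-Gaussian latent are placeholders that simplify the discrete and recurrent latent of DreamerV2, so this is only an illustrative stand-in for the actual architecture.

```python
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    """Illustrative stand-in for the DreamerV2-style components listed in Section 2.
    All sizes are placeholders; the discrete + recurrent latent is simplified to a
    single Gaussian state for readability."""

    def __init__(self, obs_dim=64, act_dim=6, z_dim=32, hid=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hid), nn.ELU(), nn.Linear(hid, hid))
        self.decoder = nn.Sequential(nn.Linear(z_dim, hid), nn.ELU(), nn.Linear(hid, obs_dim))
        self.reward_head = nn.Sequential(nn.Linear(z_dim, hid), nn.ELU(), nn.Linear(hid, 1))
        # Prior p(z_t | z_{t-1}, a_{t-1}) and posterior q(z_t | z_{t-1}, a_{t-1}, e_t).
        self.prior = nn.Linear(z_dim + act_dim, 2 * z_dim)
        self.posterior = nn.Linear(z_dim + act_dim + hid, 2 * z_dim)

    @staticmethod
    def _gaussian(params):
        mean, log_std = params.chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

    def step(self, z_prev, a_prev, obs=None):
        """One latent step: returns the prior and, if an observation is given,
        the posterior distribution over the next model state."""
        prior = self._gaussian(self.prior(torch.cat([z_prev, a_prev], -1)))
        if obs is None:
            return prior, None
        e = self.encoder(obs)
        post = self._gaussian(self.posterior(torch.cat([z_prev, a_prev, e], -1)))
        return prior, post

wm = TinyWorldModel()
z, a, o = torch.zeros(1, 32), torch.zeros(1, 6), torch.zeros(1, 64)
prior, post = wm.step(z, a, o)
print(post.mean.shape, wm.reward_head(post.rsample()).shape)  # [1, 32] and [1, 1]
```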
In this section, we use an experiment-driven approach to find which methods or components are best at tackling these challenges. Experimental procedure. We employ the URL benchmark that consists of three control domains, Walker, Quadruped and Jaco, and twelve tasks, four per domain. To evaluate the agents, we take snapshots of the agent at different times during training, i.e. 100k, 500k, 1M, and 2M frames, and finetune the agent for 100k frames. In all bar plots, we show average normalized returns on downstream tasks with error bars showing the standard deviation. To normalize results in a comparable way for all tasks, we train a fully-supervised agent with 2M frames per task. We use the mean performance of this agent, which we refer to as "oracle", as the reference scores to normalize our results in the plots (details in Appendix A). For all experiments, results are presented with at least three random seeds. 3.1 UNSUPERVISED PRE-TRAINING In the PT stage, unsupervised RL can be used to explore the environment, collecting the data to train the components of the agent. The resulting networks are then used to initialize respective components in the agent deployed for the downstream task, aiming to reduce sample complexity during FT. The first question we address is thus "What kinds of agents work best with unsupervised pre-training?". Unsupervised RL methods can be grouped into three categories (Laskin et al., 2021): knowledgebased, which aim to increase the agent’s knowledge by maximizing error prediction (Pathak et al., 2017; 2019; Burda et al., 2019b), data-based, which aim to achieve diversity of data (Yarats et al., 2021; Liu & Abbeel, 2021b) and competence-based, which aim to learn diverse skills (Liu & Abbeel, 2021a; Eysenbach et al., 2019). In Figure 2a we report the results from Laskin et al. (2021), showing that none of these approaches is particularly effective on URLB when combined with the DrQ model-free agent (Yarats et al., 2022), state-of-the-art in RL from pixels, where the data collected with unsupervised RL is used to pre-train the agent’s actor, critic, and encoder. To demonstrate that world models can be used to effectively exploit unsupervised RL data collection for fast adaptation, we study multiple approaches and use them to pre-train the Dreamer’s world model and latent actor. As knowledge-based methods we employ ICM (Pathak et al., 2017), LBS (Mazzaglia et al., 2021b), Plan2Explore (P2E; (Sekar et al., 2020)), and RND (Burda et al., 2019b). As a data-based approach, we choose APT (Liu & Abbeel, 2021b), and as competence-based approaches, we adopt DIAYN (Eysenbach et al., 2019) and APS (Liu & Abbeel, 2021a). Finally, we also test random actions, as a naive maximum entropy baseline (Haarnoja et al., 2018). Details on these methods and how we combined them with the Dreamer algorithm are discussed in Appendix B. Aggregating results per category, in Figure 2b, we show that by leveraging a pre-trained world model the overall performance improves over time for all categories, as opposed to the model-free results, where only knowledge-based approaches slightly improve. In particular, data-based and knowledge-based methods are more effective in the Walker and Quadruped domains, and random actions and competence-based are more effective in the Jaco domain. Detailed results for each method are available in Appendix E. 3.2 FINETUNING PRE-TRAINED AGENTS Some of the components learned during the PT phase, such as the world model, can be reused for fast adaptation during FT. 
However, as the reward is changing from pseudo-reward to task reward when changing from the PT to the FT phase, it is not clear if pre-training of the actor and critic can help the downstream task. To shed light on this, we seek to answer: "Which pre-trained components are useful for downstream tasks?". Here, we test different fine-tuning configurations, where we copy the weights of some of the PT components into the agent to fine-tune for the downstream task. We run the tests for the several unsupervised RL methods combined with Dreamer that we presented in Section 3.1 and show aggregated results in Figure 3 (detailed results per each method in Appendix E). Overall, fine-tuning the PT world model provides the most significant boost in performance, strengthening the hypothesis that world models are very effective with unsupervised RL. Fine-tuning the actor improves performance slightly in Walker and remarkably in Quadruped, but is harmful in the Jaco domain. An intuitive explanation is that in the Quadruped and Walker moving tasks, the exploratory behaviors help discovering reward faster. Instead, in the Jaco goal-reaching tasks, the agent needs to reach a certain target with sparse rewards. If the PT actor is initialized to move far from the target, the agent might struggle to find rewards in the small FT budget. Finally, using a PT critic is systematically worse. This can be explained by the discrepancy between intrinsic rewards and task rewards. 3.3 LEARNING AND PLANNING IN IMAGINATION Knowing a model of the environment, traditional model-based control approaches, e.g. model predictive control (MPC) (Williams et al., 2015; Chua et al., 2018; Richards, 2005), can be used to plan the agent’s action. Nonetheless, using actor-critic methods has several advantages, such as amortizing the cost of planning by caching previously computed (sub)optimal actions and computing long-term returns from a certain state, without having to predict outcomes that are far in the future. More recent hybrid strategies, such as LOOP (Sikchi et al., 2020) and TD-MPC (Hansen et al., 2022), allow combining trajectories sampled from the actor with trajectories sampled from a distribution over actions that is iteratively improved. The model and the critic are used to evaluate the trajectories, Algorithm 1 Dyna-MPC Require: Actor θ, Critic ψ, World Model ϕ 1: µ, σ: initial parameters for sampling actions 2: N,Nπ: num trajectories, num policy trajectories 3: zt, H: current model state, planning horizon 4: for each iteration j = 1..J do 5: Sample N trajectories of length H from N (µ, σ2I), starting from zt 6: Sample Nπ trajectories of length H using the actor πθ, starting from zt 7: Estimate future states, using the model, and returns, using reward and critic predictions 8: Update µ and σ using MPPI (Williams et al., 2015) 9: end for 10: return at ∼ N (µt, σ2t I) improve them, and eventually select the most promising actions, i.e. planning. In this section, we answer the question: Can we accelerate downstream task adaptation by leveraging planning? Dyna-MPC. As we pre-train a world model, we could exploit planning in latent space to adapt with limited additional environment interaction. One problem with the above strategies is that they are based upon learning off-policy actor and critic, which in our context would prevent us from exploiting the PT model to learn the actor and critic in imagination. 
In order to enable hybrid planning with the behavior learned in imagination (Hafner et al., 2019a), we develop a modification of these approaches, which we call Dyna-MPC, that combines the actor and critic learned in imagination with MPPI (Williams et al., 2015) for planning. As detailed in Algorithm 1, at each time step, we imagine a set of latent trajectories using the model, by sampling actions from a time-dependent multivariate gaussian and from the actor policy, trained with Dreamer in imagination. Returns for MPPI are estimated using reward predictions by the model and the critic. MPPI is used to update the parameters of the multivariate gaussian for J iterations. Details on how returns are estimated and the MPPI updates work are given in Appendix C. One significant difference with previous approaches is that the policy in Dyna-MPC is learned on-policy in imagination, thus no correction for learning off-policy is required (Sikchi et al., 2020). Given the insights from the previous section, we use the world models and actors pre-trained with all the different unsupervised strategies we considered (see Section 3.1)2 and test their FT performance with and without planning with Dyna-MPC. Aggregated scores are reported in Figure 4, and detailed results for each method are available in Appendix E. We observe that adopting Dyna-MPC is always beneficial, as it improves the average performance and reduces variance in all domains. 3.4 OUR METHOD: COMBINING THE FINDINGS TOGETHER In the large-scale study, we explored several design choices to establish the most adequate approach to tackle the URL benchmark, aiming to provide a general recipe for data-efficient adaptation thanks to unsupervised RL. Our approach combines the main findings we presented in the previous sections: 1. learning a model-based agent with data collected using unsupervised RL (Figure 2); 2. fine-tuning the PT world model (always) and the pre-trained actor (where beneficial), while learning the critic from scratch (Figure 3); 3. adopting a hybrid planner, as the proposed Dyna-MPC, to leverage both learning and planning in imagination (Figure 4). An overview of the method is illustrated in Figure 1 and the algorithm is presented in Appendix D. We believe the above recipe could be generally applied to unsupervised settings, also outside of URLB, with the precaution that one should carefully make two decisions: (a) whether fine-tuning the PT actor is meaningful for the downstream task or it’s better to re-learn it from scratch, (b) what is the best URL strategy to collect data. Both decisions strongly depend on the target domain/task and so it is difficult to assess their implications beforehand. However, adopting unsupervised strategies that specifically focus on interacting with interesting elements of the environment, e.g. objects, or that quickly explore large areas of the environment at the beginning of fine-tuning may help exploring and revisiting crucial states of the environment more easily (Parisi et al., 2021). For URLB, we already established (a) that the PT actor is effective in Walker and Quadruped tasks, but it is better re-learn the actor from scratch in Jaco, in Section 3.2. To decide which URL strategy to use (b) we present a detailed comparison of the performance of our approach using different exploration strategies. The results in Figure 5 show that the agent using LBS during pre-training performs overall best, as it has the highest interquartile mean (IQM) and mean scores, and the lowest optimality gap. 
Thus, in the evaluation section, we present Ours (LBS) as our approach. 4 EVALUATION AND ANALYSIS 4.1 UNSUPERVISED REINFORCEMENT LEARNING BENCHMARK In Section 3, we presented our approach, which combines the findings from our empirical large-scale study on URLB. In Figure 6, we compare the results from the original URLB paper with our approach. The performance of our method is superior in all domains. The second strongest method (DrQ with Disagreement) approaches an overall performance of 40% of the respective supervised baseline performance, while our method recovers more than 90% of its supervised counterpart. 4.2 REAL-WORLD REINFORCEMENT LEARNING BENCHMARK Algorithms developed in simulation struggle to transfer to real-world systems due to a series of implicit assumptions that are rarely satisfied in real environments, e.g. URLB assumes the dynamics between PT and FT stay the same. The RWRL benchmark (Dulac-Arnold et al., 2020) considers several challenges that are common in real-world systems and implements them on top of DMC tasks. We employ vision-based variants of the Walker Walk and Quadruped Walk tasks from the RWRL benchmark. These tasks introduce system delays, stochasticity, and perturbations of the robot’s model and sensors, which are applied with three degrees of intensity to the original environment, i.e. ‘easy’, ‘medium’, and ‘hard’ (details in Appendix F). We seek to answer whether in perturbed settings: • does unsupervised PT enable faster adaptation? • does unsupervised RL provide an advantage over random exploration? • does hybrid planning improve performance, as in URLB? In Figure 7, we present the results of our method, using LBS during PT, with and without planning with Dyna-MPC for FT, and compare to random exploration and training from scratch for 100k, 1M, and 2M frames. Crucially, the PT models are trained in the vanilla task-agnostic version of the environments from the DMC Suite, so that the results highlight the extent to which models trained in ideal conditions generalize to perturbed settings when fine-tuned in a low-data regime. 2We exceptionally do not use the pre-trained actor in the Jaco tasks, as this was shown to lead to better performance in Section 3.2 (Figure 3). Overall, we found that fine-tuning PT models offer an advantage over training from scratch for 100k frames, despite all the variations in the environment. Furthermore, on the Quadruped Easy and Medium settings, our method performs better than Dreamer@1M and not far from Dreamer@2M while using 10x and 20x less task-specific data, respectively. Our method also performs close to Dreamer@1M/2M in the Walker Easy task. Unsupervised RL for data collection (Ours) outperforms random actions in the ‘easy’ and ‘medium’ settings, showing that a better PT model yields higher FT performance, even when the dynamics of the downstream task is affected by misspecifications and noisy factors. Finally, in contrast with the findings on URLB, adopting the hybrid planner is not generally beneficial. We believe this is because the model’s predictions are less certain and precise in this setting and thus cannot inform the short-term planner accurately. 4.3 EXTENDED ANALYSIS To better analyze the learned components, we conducted a range of additional experiments. For conciseness, detailed descriptions of the experimental settings are deferred to Appendix G and we briefly summarize the takeaways in this section. Learning rewards online. 
We verify whether having to discover and learn the reward function during FT impacts performance. In Figure 8, we compare against agents that (violating the URLB settings) know the task in advance and can pre-train a reward predictor during the PT stage. We see that learning the reward predictor does not affect performance significantly for dense-reward tasks, such as the Walker and Quadruped tasks. However, in sparser reward tasks, i.e. the Jaco ones, knowing reward information in advance provides an advantage. Efficient strategies to find sparse rewards efficiently represent a challenge for future research. More details in Appendix G.1. Zero-shot adaptation. Knowing a reward predictor from PT, it could be possible to perform zero-shot control with MPC methods if the model and the reward function allow it. In Figure 9, we show that despite the zero-shot MPC (ZS) offers an advantage over Dreamer@100k, the FT phase is crucial to deliver high performance on the downstream tasks, as the agent uses this phase to collect missing information about the environment and the task. Further details in Appendix G.2. Latent dynamics discrepancy (LDD). We propose a novel metric, Latent Dynamics Discrepancy, which evaluates the distance between the latent predictions of the PT model and the same model after FT on a task. In Figure 10, we show the correlation between our metric and the performance ratio between using the PT model and the FT model for planning (see Appendix G.3 for a detailed explanation). We observed a strong negative Pearson correlation (−0.62, p-value: 0.03), highlighting that major updates in the model dynamics during FT played an important role in improving performance. Unsupervised rewards and performance. We analyze the correlation between the normalized performance of different agents and their intrinsic rewards for optimal trajectories obtained by an oracle agent in Table 1. In particular, the correlation for LBS, which overall performs best in URLB, has a statistical significance, as its p-value is < 0.05. We believe this correlation might be one of the causes of LBS outstanding performance. Further insights are provided in Appendix G.4. 5 RELATED WORK Model-based control. Dynamics models combined with powerful search methods have led to impressive results on a wide variety of tasks such as Atari (Schrittwieser et al., 2020) and continuous control (Hafner et al., 2019a; Janner et al., 2019; Sikchi et al., 2021; Lowrey et al., 2018). LOOP (Sikchi et al., 2020) and TD-MPC (Hansen et al., 2022) combine temporal difference learning and MPC. The model proposed with TD-MPC is task-oriented and thus requires a task to accelerate learning. In our work, we focus on unsupervised model learning, grounding on the DreamerV2 model (Hafner et al., 2021), whose supervision comes from predicting the environment’s observations. Methods that use no reconstruction could generalize better to visual differences (Mazzaglia et al., 2021a; Ma et al., 2020) but they lose in explainability, as they cannot decode imagined trajectories. Unsupervised RL. Prior to our work, the large-scale study of curiosity (Burda et al., 2018) provided an insightful analysis of the performance of knowledge-based methods in the reward-free setting. In our work, we leverage the URLB setting, to provide an analysis of a combination of model-based control techniques with unsupervised RL. This allowed us to formulate a strategy to adapt pre-trained models to visual control tasks in a data-efficient manner. Closely, Sekar et al. 
(2020) combines adapts the Disagreement (Pathak et al., 2019) to work with Dreamer (Hafner et al., 2019a). In our work, in addition to analyzing a wider choice of unsupervised RL strategies, we show how to better exploit the agent PT components for adaptation, and we propose a hybrid planner to improve data-efficiency. Transfer learning. In the field of transfer learning, fine-tuning is the most used approach. However, fine-tuning all the pre-trained agent components may not be the most effective strategy. In transfer learning for RL, they have studied this problem, mainly with the objective of transferring from one environment to another (Farebrother et al., 2018; Sasso et al., 2022; van Driessel & Francois-Lavet, 2021). Instead, we analyze which agent’s components should be transferred from the unsupervised PT stage to the supervised FT stage when the environment’s dynamics is assumed to stay similar or be the same. Another stream of work has studied successor representations, to enable a better transfer of the agent’s actor-critic (Hansen et al., 2020; Barreto et al., 2016). 6 CONCLUSION In order to accelerate the development and deployment of learning agents for real-world tasks, it is crucial that the employed algorithms can adapt in a data-efficient way for multiple tasks. Our study provides an empirical analysis of several design choices, which allowed us to obtain near-optimal performance in URLB and that showed robustness to perturbations in the environment, on the RWRL benchmark. We also analyzed several aspects of the learned models, to understand what could be improved further in the future to ease the adaptation process. Limitations. In the Jaco reaching tasks, we found that a bad initialization of the pre-trained actor can actually harm the agent’s performance. While competence-based approaches should address this limitation, by learning a variety of skill behaviors, their performance on the other domains has been subpar. Future work should aim to find a more general approach to pre-train behavior for fast adaptation or improve the exploration capabilities of competence-based approaches. Another issue we encountered, on the RWRL benchmark, is that if the environment introduces too intense perturbations during adaptation, relying on the predictions of the adopted world model becomes problematic, to the extent that exploiting a planner is not useful anymore. Developing more resilient models that can be trained in an unsupervised fashion and used for data-efficient planning, even in presence of complex perturbations, will be the focus of future studies. Reproducibility statement We reported in the main text (Algorithm 1) the pseudo-code for DynaMPC and in Appendix D the pseudo-code for our end-to-end approach. We also provide instructions on how we implemented our methods (Appendix B) and all the model and training hyperparameters to implement and reproduce the results (Table 4). We will release our code and scripts. A NORMALIZATION SCORES In Table 2, we report the mean scores for the URLB Expert, used to normalize the scores in the URLB paper, and for Dreamer@2M, which we use to normalize returns of our methods, where both supervised baselines have been trained individually on each of the 12 tasks from URLB for 2M frames. We additionally report mean and standard deviations for the best performing unsupervised baseline from URLB. which is Disagreement (Pathak et al., 2019), and our method (using LBS for data collection). 
We notice that our scores approach the Dreamer@2M’s scores in several tasks, eventually outperforming them in a few tasks (e.g. Walker Flip, Quadruped Jump). We believe this merit is due both to the exploration pre-training, which may have found more rewarding trajectories than greedy supervised RL optimization and of the improved Dyna-MPC planning strategy. B INTEGRATING UNSUPERVISED RL STRATEGIES We summarize here the unsupervised RL approaches tested and how we integrated them with the Dreamer algorithm for exploration. For all methods, rewards have been normalized during training using an exponential moving average with momentum 0.95, with the exceptions of RND, which follows its original reward normalization (Burda et al., 2019b), and APS, whose rewards are not normalized because they are used to regress the skill that is closer to the downstream task during FT. ICM. The Intrinsic Curiosity Module (ICM; Pathak et al. (2017)) defines intrinsic rewards as the error between states projected in a feature space and a feature dynamics model’s predictions. We use the Dreamer agent encoder et = fϕ(st) to obtain features and train a forward dynamics model g(et|et−1, at−1) to compute rewards as: rt ICM ∝ ∥g(et|et−1, at−1)− et∥2. As the rewards for ICM require environment states (going through the encoder to compute prediction error), we train a reward predictor to allow estimating rewards in imagination. Plan2Explore. The Plan2Explore algorithm (Sekar et al., 2020) is an adaptation of the Disagreement algorithm (Pathak et al., 2019) for latent dynamics models. An ensemble of forward dynamics models is trained to predict the features embedding et = fϕ(st), given the previous latent state and actions, i.e. g(et|zt−1, at−1, wk), where wk are the parameters of the k-th predictor. Intrinsic rewards are defined as the variance of the ensemble predictions: rt P2E ∝ Var({g(et|zt−1, at−1, wk)|k ∈ [1, ...,K]}). Plan2Explore requires only latent states and actions, thus it can be computed directly in imagination. We used an ensemble of 5 models. RND. Random Network Distillation (RND; Burda et al. (2019b)) learns to predict the output of a randomly initialized network n(st) that projects the states into a more compact random feature space. As the random network is not updated during training, the prediction error should diminish for already visited states. The intrinsic reward here is defined as: rt RND ∝ ∥g(st)− n(st)∥2 As the rewards for RND requires environment states (to encode with the random network), we train a reward predictor to allow estimating rewards in imagination. LBS. In Latent Bayesian Surprise (LBS; Mazzaglia et al. (2021b)), they use the KL divergence between the posterior and the prior of a latent dynamics model as a proxy for the information gained with respect to the latent state variable, by observing new states. Rewards are computed as: rt LBS ∝ DKL[q(zt|zt−1, at−1, et)∥p(zt|zt−1, at−1)] As the rewards for LBS requires environment states (to compute the posterior distribution), we train a reward predictor to allow estimating rewards in imagination. APT. Active Pre-training (APT; Liu & Abbeel (2021b)) uses a particle-based estimator based on the K nearest-neighbors algorithm (Singh et al., 2003) to estimate entropy for a given state. We implement APT on top of the deterministic component of the latent states z̄t, providing rewards as: rt APT ∝ k∑ i log ∥z̄t − z̄it∥2, where k are the nearest-neighbor states in latent space. 
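As a concrete illustration of two of the intrinsic rewards described above, the following PyTorch sketch shows a simplified ensemble-disagreement reward (Plan2Explore) and a batch-wise particle-based reward (APT) on the deterministic latent component. This is an illustrative implementation under our assumptions, not the exact code used in the experiments:

import torch
import torch.nn as nn

class DisagreementEnsemble(nn.Module):
    # Plan2Explore-style intrinsic reward: variance of an ensemble of one-step
    # feature predictors g(e_t | z_{t-1}, a_{t-1}, w_k).
    def __init__(self, latent_dim, action_dim, feat_dim, n_models=5, hidden=256):
        super().__init__()
        self.models = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim + action_dim, hidden), nn.ELU(),
                          nn.Linear(hidden, feat_dim))
            for _ in range(n_models)])

    def reward(self, z_prev, a_prev):
        x = torch.cat([z_prev, a_prev], dim=-1)
        preds = torch.stack([m(x) for m in self.models])   # (K, batch, feat_dim)
        return preds.var(dim=0).mean(dim=-1)               # disagreement per state

def apt_reward(z_det, k=12, eps=1e-8):
    # APT particle-based entropy estimate on the deterministic latent component:
    # sum of log-distances to the k nearest neighbours within the batch.
    dists = torch.cdist(z_det, z_det)
    knn, _ = dists.topk(k + 1, largest=False)               # includes the self-distance 0
    return torch.log(knn[:, 1:] + eps).sum(dim=-1)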
As APT requires only latent states, it can be computed directly in imagination. We used k = 12 nearest neighbors. DIAYN. Diversity is All you need (DIAYN; Eysenbach et al. (2019)) maximizes the mutual information between the states and latent skills w. We implement DIAYN on top of the latent space of Dreamer, writing the mutual information as I(wt, zt) = H(wt)−H(wt|zt). The entropy H(wt) is kept maximal by sampling wt ∼ Unif(wt) from a discrete uniform prior distribution, while H(wt|zt) is estimated learning a discriminator q(wt|zt). We compute intrinsic rewards as: rt DIAYN ∝ log q(wt|zt) Additionally, DIAYN maximizes the entropy of the actor, so we add an entropy maximization term to Dreamer’s objective (Haarnoja et al., 2018). As DIAYN requires model states and skills sampled from a uniform distribution to compute rewards, we can directly compute them in imagination. For FT, the skill adapted is the one with the highest expected rewards, considering the states and rewards obtained in the initial episodes. APS. Active Pre-training with Successor features (APS; Liu & Abbeel (2021a)) maximizes the mutual information between the states and latent skills w. We implement APS on top of the latent space of Dreamer, writing the mutual information as I(wt, zt) = H(zt)−H(zt|wt). The entropy term H(zt) is estimated using a particle-based estimator on top of the deterministic component of the latent states z̄t, as for APT, while the term H(zt|wt) is estimated learning a discriminator q(zt|wt). The intrinsic rewards for APS can be written as: rt APS ∝ rtAPT + log q(wt|zt) As APS requires model states and uniformly sampled skills to compute rewards, we can directly compute them in imagination. For FT, the skill to adapt is selected using linear regression over the states and rewards obtained in the initial episodes (Liu & Abbeel, 2021a). C DYNA-MPC To further improve data efficiency, we chose to use an hybrid planner that combines reinforcement learning and MPC (Hansen et al., 2022; Sikchi et al., 2020; Lowrey et al., 2018). Previous works leveraged model-free off-policy algorithms (Hansen et al., 2022; Sikchi et al., 2020) to learn the actor and critic in a more computationally efficient manner. The policy used to act on the environment combines action samples from the actor network with MPC, while the critic and the actor are learned "offline" from previously collected data. This has several benefits but also leads to an issue referred to as “actor divergence" (Sikchi et al., 2020), which consists of the policy used for data collection being different from the policy that is used to learn the critic. In our study, we found that using the PT world model to learn the actor and the critic is crucial to improve data-efficiency during FT (see Figure 3). Thus, we discard the option of learning the actor and critic with off-policy deep RL. Instead, we design a new hybrid planner, which we call Dyna-MPC, that learns actor and critic functions in the model imagination (Sutton, 1991), using the Dreamer algorithm (Hafner et al., 2019a), and then combines their predictions with MPPI (Williams et al., 2015) for acting on the environment. By doing so we mitigate the "actor divergence" issue as actor and critic are learned on-policy on the trajectories generated with the model. 
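To complement Algorithm 1, here is a condensed Python sketch of one Dyna-MPC action-selection step. The helpers imagine_returns and mppi_update (model rollout with reward and critic-based return estimation, and the Gaussian update of Eq. (2) below), as well as the world model's rollout_actor method, are assumptions for illustration rather than the released implementation:

import torch

def dyna_mpc_step(z_t, actor, world_model, imagine_returns, mppi_update,
                  mu, sigma, n_samples=512, n_policy=32, horizon=5,
                  iterations=6, top_k=64):
    # One action-selection step: mix Gaussian-sampled action sequences with
    # sequences sampled from the actor, score them in imagination, and refine
    # (mu, sigma) with MPPI. mu and sigma have shape (horizon, action_dim).
    for _ in range(iterations):
        gauss = mu + sigma * torch.randn(n_samples, *mu.shape)                # (N, H, A)
        policy = world_model.rollout_actor(z_t, actor, horizon, n_policy)     # (N_pi, H, A)
        trajs = torch.cat([gauss, policy], dim=0)
        returns = imagine_returns(world_model, z_t, trajs)   # model rewards + critic bootstrap
        mu, sigma = mppi_update(trajs, returns, mu, sigma, k=top_k)
    # Sample the action for the current step from the refined first-step distribution.
    return mu[0] + sigma[0] * torch.randn_like(sigma[0])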
The critic is learned in the model’s imagination, computing the expected value of the actor’s actions using GAE-λ estimates of the returns (Schulman et al., 2016; Hafner et al., 2019a):

V^λ_t = r_t + γ_t * { (1 − λ) v_ψ(z_{t+1}) + λ V^λ_{t+1}   if t < H
                      v_ψ(z_H)                              if t = H }     (1)

where r_t is the reward for state z_t, yielded by the reward predictor of the world model, and H is the imagination horizon. When computing returns for MPPI we use the same return estimates. At each time step, we use MPPI to select the best action. MPPI iteratively fits the parameters of a time-dependent multivariate Gaussian distribution with diagonal covariance, updating mean and standard deviation parameters using an importance weighted average of the top-k trajectories with the highest estimated returns. At every step, N trajectories Γ_i = {a_{0,i}, a_{1,i}, ..., a_{H,i}} of length H are obtained sampling actions from the distributions a_t ∼ N(µ_t, σ_t² I) and N_π trajectories are sampled from the actor network a_t ∼ π_θ(a_t|z_t), and their outcomes are predicted using the model. At each MPPI iteration, the distribution parameters are updated as follows:

µ = (∑_{i=1}^{k} Ω_i Γ*_i) / (∑_{i=1}^{N} Ω_i),    σ = max( √( (∑_{i=1}^{N} Ω_i (Γ*_i − µ)²) / (∑_{i=1}^{N} Ω_i) ), ϵ ),     (2)

where Ω_i = exp(τ V^λ_i), τ is a temperature parameter, * indicates the trajectory is in the top-k, and ϵ is a clipping factor to avoid too small standard deviations (Hansen et al., 2022). To reduce the number of iterations required for convergence, we reuse the 1-step shifted mean obtained at the previous timestep (Argenson & Dulac-Arnold, 2020).

D ALGORITHM

Algorithm 2 Unsupervised Model-based Pre-Training for Data-efficient Control from Pixels
Require: Actor θ, Critic ψ, World Model ϕ
1: Intrinsic reward r_int, extrinsic reward r_ext
2: Environment, M downstream tasks T_k, k ∈ [1, . . . , M]
3: Pre-train frames N_PT, fine-tune frames N_FT, environment frames/update τ
4: Initial model state z_0, hybrid planner Dyna-MPC, replay buffers D_PT, D_FT
5:
6: // Pre-training
7: for t = 0, . . . , N_PT do
8:   Draw action from the actor, a_t ∼ π_θ(a_t|z_t)
9:   Apply action to the environment, s_{t+1} ∼ P(·|s_t, a_t)
10:  Add transition to replay buffer, D_PT ← D_PT ∪ (s_t, a_t, s_{t+1})
11:  Infer model state, z_{t+1} ∼ q(z_{t+1}|z_t, a_t, f_ϕ(s_{t+1}))
12:  if t mod τ = 0 then
13:    Update world model parameters ϕ on the data from the replay buffer D_PT
14:    Update actor-critic parameters {θ, ψ} in imagination, maximizing r_int
15:  end if
16: end for
17: Output pre-trained parameters {ψ_PT, θ_PT, ϕ_PT}
18:
19: // Fine-tuning
20: for T_k ∈ [T_1, . . . , T_M] do
21:   Initialize fine-tuning world-model with ϕ_PT
22:   (Optional) Initialize fine-tuning actor with θ_PT
23:   for t = 0, . . . , N_FT do
24:     Draw action from the actor, a_t ∼ π_θ(a_t|z_t)
25:     Use the planner for selecting best action, a_t ∼ Dyna-MPC(z_t)
26:     Apply action to the environment, s_{t+1}, r_ext_t ∼ P(·|s_t, a_t)
27:     Add transition to replay buffer, D_FT ← D_FT ∪ (s_t, a_t, r_ext_t, s_{t+1})
28:     Infer model state, z_{t+1} ∼ q(z_{t+1}|z_t, a_t, f_ϕ(s_{t+1}))
29:     if t mod τ = 0 then
30:       Update world model parameters ϕ on the data from the replay buffer D_FT
31:       Update actor-critic parameters {θ, ψ} in imagination, maximizing r_ext
32:     end if
33:   end for
34:   Evaluate performance on T_k
35: end for

E ADDITIONAL RESULTS
We present complete results, for each unsupervised RL method, for the large-scale study experiments presented in Section 3. Can a pre-training stage longer than 2M frames be beneficial? In Figure 14, we report FT results with our full method, every 1M frames up to 5M PT frames. 
The aggregated results show that, adopting our method, longer PT can increase performance further, especially until 4M steps. The performance in all domains keeps increasing or remains steady until 5M steps, with two exceptional cases, Walker for Plan2Explore and Jaco for APS, where performance drops between 4M and 5M steps. For these experiments, we kept the size of the model and all the hyperparameters unvaried with respect to the 2M PT frames experiments, but we increased the replay buffer maximum size to 5M frames. By increasing the model capacity and adopting additional precautions, such as annealing the learning rate, the agent might benefit even more from longer pre-training; we aim to analyse this in more detail in future work.
F RWRL SETTINGS
We take the Quadruped and Walker tasks from the RWRL benchmark and replace the low-dimensional sensor inputs with RGB camera inputs. While this removes some of the perturbations planned in the benchmark (Dulac-Arnold et al., 2020), such as noise in the sensors, it introduces the difficulty of a different dynamics in pixel space (due to the other perturbations), compared to the one observed during pre-training in the vanilla simulation environment.
G EXTENDED ANALYSIS
We note that, to run the experiments faster, we did not use Dyna-MPC for the extended analysis. Furthermore, the Jaco tasks used slightly differ from the original ones in URLB, only in that the target to reach cannot move. This allows consistency of the reward function between PT and FT, so that a reward predictor can be trained on ‘reward-labelled’ PT data. However, because of this change, the performance in Jaco may differ from the other main results (particularly in Figure 8 and Figure 9).
G.1 LEARNING REWARDS ONLINE
In Figure 8 of the main text, we measure the gap in performance between pre-trained agents that have no knowledge of the reward function at the beginning of fine-tuning and agents whose reward predictor is initialized from a reward predictor learned on top of the unsupervised pre-training data (violating the URLB settings). Crucially, the agent during unsupervised PT can learn the reward predictor without affecting either the model learning or the exploration process. To not affect the model, gradients are stopped between the reward predictor and the rest of the world model. To not affect exploration, the rewards used to train the agent’s actor and critic remain the intrinsic rewards, for exploration.
G.2 ZERO-SHOT ADAPTATION
Using agents that have access to a PT reward predictor, we explore the idea of zero-shot adaptation using MPC, i.e. trying to solve the URLB tasks using only planning with the pre-trained world model and reward predictor. In order to obtain good performance, this assumes that the model correctly learned the dynamics of the environment and explored rewarding transitions that are relevant to the downstream task, during pre-training. In Figure 9 of the main text, we compare the results of performing MPC in a zero-shot setting (ZS) with the performance of an MPC agent that is allowed 100k frames for fine-tuning (FT). As for the MPC method, we employ MPPI (Williams et al., 2015). Because these experiments are particularly expensive to run, we ran them only on the agents trained with the Plan2Explore URL approach. We observe that the performance of zero-shot MPC is generally weak. 
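Relating back to Appendix G.1, the reward predictor learned on PT data can be kept from influencing the world model by stopping gradients at its input, roughly as in the following minimal sketch (reward_head and the tensor names are hypothetical, not the released code):

import torch.nn.functional as F

def reward_predictor_loss(reward_head, model_states, env_rewards):
    # Train the reward head on PT data without influencing the world model:
    # gradients are stopped at its input, and the intrinsic rewards (not these
    # predictions) keep driving the exploration actor and critic.
    pred = reward_head(model_states.detach())    # no gradient flows into the model
    return F.mse_loss(pred, env_rewards)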
While zero-shot MPC overall performs better than the non-pre-trained model, simply applying MPC with the pre-trained world model and a reward predictor trained on the pre-training stage data is not sufficient to guarantee satisfactory performance. The fact that exploiting the fine-tuning stage using the same MPC approach generally boosts performance demonstrates that the model benefits greatly from the FT stage. Still, the performance of MPC generally lags behind the actor-critic performance, suggesting that, especially in a higher-dimensional action space such as the Quadruped one, amortizing the cost of planning with an actor-critic seems crucial to achieve higher performance.
G.3 LATENT DYNAMICS DISCREPANCY
Model misspecification is a useful measure to assess the uncertainty or inaccuracy of the model dynamics. It is computed as the difference between the dynamics predictions and the real environment dynamics. The metric helps build robust RL strategies that take the dynamics uncertainty into account while searching for the optimal behavior (Talvitie, 2018). However, with pixel-based inputs the dynamics of the environment are observed through high-dimensional images. This, in turn, can hurt the metric evaluation, since distances in pixel space can be misleading. In our approach, we use a model-based RL agent that learns the dynamics model in a compact latent space Z. Our novel metric, Latent Dynamics Discrepancy (LDD), quantifies the “misspecification" of the learned latent dynamics accordingly. The metric quantifies the distance between the predictions of the pre-trained model and the same model after fine-tuning on a downstream task. However, as the decoder of the world model gets updated during fine-tuning, the latent space mapping between model states z and environment states s might drift. For this reason, we freeze the agent’s decoder weights, so that the model can only improve the posterior and the dynamics. This ensures that the mapping Z −→ S remains unchanged and allows comparing the dynamics model after fine-tuning with the one before fine-tuning. In order to measure the distance between the distributions output by the dynamics network, we chose the symmetrical Jensen-Shannon divergence:

LDD = E_{(z_t, a_t)} [ D_JS[ p_FT(z_{t+1}|z_t, a_t) ∥ p_PT(z_{t+1}|z_t, a_t) ] ],     (3)

where the expectation is taken over previous model states z_t sampled from the fine-tuned posterior q_FT(z_t) and actions a_t sampled from an oracle actor π*(a_t|z_t), so that we evaluate the metric on optimal trajectories, whose environment state distribution corresponds to the stationary distribution induced by the actor, s_t ∼ d^{π*}(s_t). We used 30 trajectories per task in our evaluation. We observe in our experiments that there exists a correlation between the metric and the performance ratio between a zero-shot model and a fine-tuned model (see Figure 10 in the main paper). The key observation is that major updates in the model dynamics during the fine-tuning phase played an important role in improving the agent’s performance, compared to the pre-trained model and zero-shot performance. Future research may attempt to reduce such dependency by either improving the model learning process, so that the pre-trained dynamics could have greater accuracy, or the data collection process, proposing URL methods that directly aim to reduce such uncertainty. 
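For completeness, the LDD metric of Eq. (3) can be sketched in a few lines of PyTorch. The sketch below assumes, for simplicity, that the two dynamics networks are callables returning torch Categorical distributions over the stochastic latent (the real model uses a stack of discrete latents), so it is an illustration of the computation rather than the exact evaluation script:

import torch
from torch.distributions import Categorical, kl_divergence

def jensen_shannon(p, q):
    # Symmetric JS divergence via the mixture distribution m = (p + q) / 2.
    m = Categorical(probs=0.5 * (p.probs + q.probs))
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def latent_dynamics_discrepancy(dynamics_ft, dynamics_pt, states, actions):
    # Eq. (3): average JS divergence between the fine-tuned and pre-trained
    # one-step latent dynamics on (state, action) pairs from optimal trajectories.
    return jensen_shannon(dynamics_ft(states, actions),
                          dynamics_pt(states, actions)).mean()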
G.4 UNSUPERVISED REWARDS AND PERFORMANCE
We further analyzed the correlation between the normalized performance of the different exploration agents and their intrinsic rewards for optimal trajectories obtained by an oracle agent. A strong negative correlation between the two factors should indicate that the agent is more interested in seeing the optimal trajectories when its performance is low on the task. We observe that there is a negative correlation between the performance of Plan2Explore (P2E), ICM, and LBS and their intrinsic rewards, while we found a correlation close to zero for RND (see Table 1 in the main text). Out of the methods tested, only the correlation for LBS is statistically significant, as its p-value is < 0.05. This is likely one of the key factors for the high performance of the agent using LBS on the benchmark. One possible explanation is that LBS searches for transitions of the environment that are difficult to predict for the dynamics, so the model likely learns those transitions more accurately, facilitating planning during the fine-tuning stage. Another potential explanation is that, given the high correlation between intrinsic and extrinsic rewards, the actor initialized by LBS performs better at the beginning of FT, speeding up adaptation.
H HYPERPARAMETERS
Most of the hyperparameters we used for world-model training are the same as in the original DreamerV2 work (Hafner et al., 2021). Specific details are as outlined here: For the pure MPC-based experiments, we increased the number of MPPI samples from 512 to 1000, the number of top-k from 64 to 100, and the horizon from 5 to 15, to compensate for the absence of the actor network’s samples and the critic’s predictions in the return estimates.
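As a final illustration of the MPPI machinery configured by these hyperparameters, the following PyTorch sketch implements the importance-weighted distribution update of Eq. (2). It is a simplified variant (weights are normalized over the top-k sequences only) with hypothetical argument names, not the released implementation:

import torch

def mppi_update(trajs, returns, mu, sigma, k=64, temperature=0.5, eps=1e-2):
    # trajs: (N, H, action_dim) candidate action sequences; returns: (N,) estimated
    # lambda-returns. Keep the top-k sequences and fit a new diagonal Gaussian
    # with importance weights Omega_i = exp(temperature * return_i).
    top_returns, idx = returns.topk(k)
    top_trajs = trajs[idx]                                            # (k, H, action_dim)
    weights = torch.exp(temperature * (top_returns - top_returns.max()))  # numerically stable
    weights = (weights / weights.sum()).view(-1, 1, 1)
    new_mu = (weights * top_trajs).sum(dim=0)
    new_var = (weights * (top_trajs - new_mu) ** 2).sum(dim=0)
    new_sigma = new_var.sqrt().clamp(min=eps)                         # avoid collapsing std
    return new_mu, new_sigma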
1. What is the focus of the paper regarding reward free learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental analysis? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Can you identify any concerns or questions regarding the paper's claims and comparisons with other works?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper studies the problem of reward free learning, where the setup involves a pre-training stage (when an intrinsic reward is used based on different prior methods) and a fine-tuning stage (when a particular extrinsic reward is provided). The claim is that model-based methods perform better than model-free methods in such a setup. The authors perform certain ablations where different components of the model-based method (in this case the actor, critic, and dynamics model of the Dreamer algorithm) are shown to affect fine-tuning performance differently. Finally, the authors deploy a MPC style planning algorithm instead of using the learnt actor when fine-tuning to new rewards. Strengths And Weaknesses Strengths Important and relevant problem setup Fairly thorough experiments Weakness Lack of a coherent story Unclear primary contributions Somewhat hard to read through Clarity, Quality, Novelty And Reproducibility The paper is unclear to read in some parts, is fairly high quality, lacks quite a bit on novelty, and is decently reproducible.
ICLR
Title Unsupervised Model-based Pre-training for Data-efficient Control from Pixels Abstract Controlling artificial agents from visual sensory data is an arduous task. Reinforcement learning (RL) algorithms can succeed in this but require large amounts of interactions between the agent and the environment. To alleviate the issue, unsupervised RL proposes to employ self-supervised interaction and learning, for adapting faster to future tasks. Yet, whether current unsupervised strategies improve generalization capabilities is still unclear, especially in visual control settings. In this work, we design an unsupervised RL strategy for data-efficient visual control. First, we show that world models pre-trained with data collected using unsupervised RL can facilitate adaptation for future tasks. Then, we analyze several design choices to adapt faster, effectively reusing the agents’ pre-trained components, and planning in imagination, with our hybrid planner, which we dub Dyna-MPC. By combining the findings of a large-scale empirical study, we establish an approach that strongly improves performance on the Unsupervised RL Benchmark, requiring 20× less data to match the performance of supervised methods. The approach also demonstrates robust performance on the Real-Word RL benchmark, hinting that the approach generalizes to noisy environments. 1 INTRODUCTION Modern successes of deep reinforcement learning (RL) have shown promising results for control problems (Levine et al., 2016; OpenAI et al., 2019; Lu et al., 2021). However, training an agent for each task individually requires a large amount of task-specific environment interactions, incurring huge redundancy and prolonged human supervision. Developing algorithms that can efficiently adapt and generalize to new tasks has hence become an active area of research in the RL community. In computer vision and natural language processing, unsupervised learning has enabled training models without supervision to reduce sample complexity on downstream tasks (Chen et al., 2020; Radford et al., 2019). In a similar fashion, unsupervised RL (URL) agents aim to learn about the environment without the need for external reward functions, driven by intrinsic motivation (Pathak et al., 2017; Burda et al., 2019a; Bellemare et al., 2016). Any learned models can then be adapted to downstream tasks, aiming to reduce the required amount of interactions with the environment. Recently, the Unsupervised RL Benchmark (URLB) (Laskin et al., 2021) established a common protocol to compare self-supervised algorithms across several domains and tasks from the DMC Suite (Tassa et al., 2018). In the benchmark, an agent is allowed a task-agnostic pre-training stage, where it can interact with the environment in an unsupervised manner, followed by a fine-tuning stage where, given a limited budget of interactions with the environment, the agent should quickly adapt for a specific task. However, the results obtained by Laskin et al. (2021) suggest that current URL approaches may be insufficient to perform well on the benchmark, especially when the inputs of the agent are pixel-based images. World models have proven highly effective for solving RL tasks from vision both in simulation (Hafner et al., 2021; 2019a) and in robotics (Wu et al., 2022), and they are generally data-efficient as they enable learning behavior in imagination (Sutton, 1991). 
Inspired by previous work on exploration (Sekar et al., 2020), we hypothesize this feature could be key in the unsupervised RL setting, as a pre-trained world model can leverage previous experience to learn behavior for new tasks in imagination, and in our work, we study how to best exploit this feature. We adopt the URLB setup to perform a large-scale study, involving several unsupervised RL methods for pre-training model-based agents, different fine-tuning strategies, and a new improved algorithm for efficiently planning with world models. The resulting approach, which combines the findings of our study, strongly improves performance on the URL benchmark from pixels, nearly achieving the asymptotic performance of supervised RL agents, trained with 20x more task-specific data, and bridging the gap with low-dimensional state inputs (Laskin et al., 2021). Contributions. This work does not propose a novel complex method. Rather, we study the interplay of various existing components and propose a novel final solution that outperforms existing state of the art on URLB by a staggering margin. Specifically: • we demonstrate that unsupervised RL combined with world models can be an effective pre-training strategy to enable data-efficient visual control (Section 3.1), • we study the interplays between the agent’s pre-trained components that improve sample efficiency during fine-tuning (Section 3.2), • we propose a novel hybrid planner we call Dyna-MPC, which allows us to effectively combine behaviors learned in imagination with planning (Section 3.3), • combining our findings into one approach, we outperform previous approaches on URLB from pixels, nearly solving the benchmark (Section 4.1), • we show the approach is resilient to environment perturbations, evaluating it on the Real World RL benchmark (Dulac-Arnold et al., 2020) (Section 4.2), • we present an extensive analysis of the pre-trained agents, aimed at understanding in-depth the current findings and limitations (Section 4.3). An extensive empirical evaluation, supported by more than 2k experiments, among main results, analysis and ablations, was used to carefully design our method. We hope that our large-scale evaluation will inform future research towards developing and deploying pre-trained agents that can be adapted with considerably less data to more complex/realistic tasks, as it has happened with unsupervised pre-trained models for vision (Parisi et al., 2022) and language (Ahn et al., 2022). 1 2 PRELIMINARIES Reinforcement learning. The RL setting can be formalized as a Markov Decision Process (MDP), denoted with the tuple {S,A, T,R, γ}, where S is the set of states, A is the set of actions, T is the state transition dynamics, R is the reward function, and γ is a discount factor. The objective of an RL agent is to maximize the expected discounted sum of rewards over time for a given task, also called return, and indicated as Gt = ∑T k=t+1 γ (k−t−1)rk. In continuous-action settings, you can learn an actor, i.e. a model predicting the action to take from a certain state, and a critic, i.e. a model that estimates the expected value of the actor’s actions over time. Actor-critic algorithms can be combined with the expressiveness of neural network models to solve complex continuous control tasks (Haarnoja et al., 2018; Lillicrap et al., 2016; Schulman et al., 2017). 1The PyTorch code for the experiments will be open-sourced upon publication. Unsupervised RL. 
In this work, we investigate the problem of fast adaptation for a downstream task, after a phase of unsupervised training and interaction with the environment. Our training routine, based on the setup of URLB (Laskin et al., 2021), is made of two phases: a pre-training (PT) phase, where the agent can interact with a task-agnostic version of the environment for up to 2M frames, and a fine-tuning phase (FT), where the agent is given a task to solve and a limited budget of 100k frames. During the PT phase, rewards are removed so that sensible information about the environment should be obtained by exploring the domain-dependent dynamics, which is expected to remain similar or unchanged in the downstream tasks. During FT, the agent receives task-specific rewards when interacting with the environment. As the agent has no prior knowledge of the task, it should both understand the task and solve it efficiently, in a limited interaction budget. In this setting, the performance of unsupervised model-free RL (Yarats et al., 2022) were shown to be insufficient as reported in (Laskin et al., 2021). We believe the key reason for this is that model-free RL algorithms can exploit only a little part of the information obtained with self-supervised interaction, as they rely uniquely on actor and critic’s predictions. World models. In this work, we ground upon the DreamerV2 agent (Hafner et al., 2021), which learns a world model (Ha & Schmidhuber, 2018; Hafner et al., 2019b) predicting the outcomes of actions in the environment. The dynamics is captured into a latent space Z , providing a compact representation of the high-dimensional inputs. The world model consists of the following components: Encoder: et = fϕ(st), Decoder: pϕ(st|zt), Dynamics: pϕ(zt|zt−1, at−1), Posterior: qϕ(zt|zt−1, at−1, et). The model states zt have both a deterministic component, modeled using the recurrent state of a GRU (Chung et al., 2014), and a (discrete) stochastic component. The encoder and decoder are convolutional neural networks (CNNs) and the remaining components are multi-layer perceptrons (MLPs). The world model is trained end-to-end by optimizing an evidence lower bound (ELBO) on the log-likelihood of the data collected in the environment (Hafner et al., 2019b;a). For the encoder and the decoder networks, we used the same architecture as in Hafner et al. (2021). For control, the agent learns latent actor πθ(at|zt) and critic vψ(zt) networks. Both components are trained online within the world model, by imagining the model state outcomes of the actions produced by the actor, using the model dynamics. Rewards for imagined trajectories are provided by a reward predictor, pϕ(rt|zt) trained to predict environment rewards, and they are combined with the critic predictions to produce a GAE-λ estimate of the returns (Schulman et al., 2016). The actor maximizes estimates of returns, backpropagating gradients through the model dynamics. The hyperparameters for the agent, which we keep fixed across all domains/tasks, can be found in Appendix H. 3 UNSUPERVISED MODEL-BASED PRE-TRAINING FOR DATA-EFFICIENT CONTROL FROM PIXELS To best exploit self-supervised pre-training for data-efficient adaptation, it is important that the agent: (i) meaningfully interacts with the environment during the PT phase, to discover useful transitions; (ii) successfully reuses the modules learned during PT for fast adaptation; and (iii) efficiently employs the FT phase to quickly understand and master the downstream task. 
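To make the world-model interface summarized in Section 2 concrete, a highly simplified sketch of the component signatures could look as follows. The actual DreamerV2 networks (CNN encoder/decoder, GRU-based recurrent state, discrete stochastic latents) are more involved, and all names here are illustrative only:

import torch.nn as nn

class WorldModelSketch(nn.Module):
    # Component signatures only; not the actual DreamerV2 architecture.
    def __init__(self, obs_dim, embed_dim, latent_dim, action_dim):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, embed_dim)                        # e_t = f_phi(s_t)
        self.decoder = nn.Linear(latent_dim, obs_dim)                       # p_phi(s_t | z_t)
        self.dynamics = nn.Linear(latent_dim + action_dim, latent_dim)      # p_phi(z_t | z_{t-1}, a_{t-1})
        self.posterior = nn.Linear(latent_dim + action_dim + embed_dim, latent_dim)  # q_phi(z_t | z_{t-1}, a_{t-1}, e_t)
        self.reward_head = nn.Linear(latent_dim, 1)                         # p_phi(r_t | z_t)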
In this section, we use an experiment-driven approach to find which methods or components are best at tackling these challenges. Experimental procedure. We employ the URL benchmark that consists of three control domains, Walker, Quadruped and Jaco, and twelve tasks, four per domain. To evaluate the agents, we take snapshots of the agent at different times during training, i.e. 100k, 500k, 1M, and 2M frames, and finetune the agent for 100k frames. In all bar plots, we show average normalized returns on downstream tasks with error bars showing the standard deviation. To normalize results in a comparable way for all tasks, we train a fully-supervised agent with 2M frames per task. We use the mean performance of this agent, which we refer to as "oracle", as the reference scores to normalize our results in the plots (details in Appendix A). For all experiments, results are presented with at least three random seeds. 3.1 UNSUPERVISED PRE-TRAINING In the PT stage, unsupervised RL can be used to explore the environment, collecting the data to train the components of the agent. The resulting networks are then used to initialize respective components in the agent deployed for the downstream task, aiming to reduce sample complexity during FT. The first question we address is thus "What kinds of agents work best with unsupervised pre-training?". Unsupervised RL methods can be grouped into three categories (Laskin et al., 2021): knowledgebased, which aim to increase the agent’s knowledge by maximizing error prediction (Pathak et al., 2017; 2019; Burda et al., 2019b), data-based, which aim to achieve diversity of data (Yarats et al., 2021; Liu & Abbeel, 2021b) and competence-based, which aim to learn diverse skills (Liu & Abbeel, 2021a; Eysenbach et al., 2019). In Figure 2a we report the results from Laskin et al. (2021), showing that none of these approaches is particularly effective on URLB when combined with the DrQ model-free agent (Yarats et al., 2022), state-of-the-art in RL from pixels, where the data collected with unsupervised RL is used to pre-train the agent’s actor, critic, and encoder. To demonstrate that world models can be used to effectively exploit unsupervised RL data collection for fast adaptation, we study multiple approaches and use them to pre-train the Dreamer’s world model and latent actor. As knowledge-based methods we employ ICM (Pathak et al., 2017), LBS (Mazzaglia et al., 2021b), Plan2Explore (P2E; (Sekar et al., 2020)), and RND (Burda et al., 2019b). As a data-based approach, we choose APT (Liu & Abbeel, 2021b), and as competence-based approaches, we adopt DIAYN (Eysenbach et al., 2019) and APS (Liu & Abbeel, 2021a). Finally, we also test random actions, as a naive maximum entropy baseline (Haarnoja et al., 2018). Details on these methods and how we combined them with the Dreamer algorithm are discussed in Appendix B. Aggregating results per category, in Figure 2b, we show that by leveraging a pre-trained world model the overall performance improves over time for all categories, as opposed to the model-free results, where only knowledge-based approaches slightly improve. In particular, data-based and knowledge-based methods are more effective in the Walker and Quadruped domains, and random actions and competence-based are more effective in the Jaco domain. Detailed results for each method are available in Appendix E. 3.2 FINETUNING PRE-TRAINED AGENTS Some of the components learned during the PT phase, such as the world model, can be reused for fast adaptation during FT. 
However, as the reward is changing from pseudo-reward to task reward when changing from the PT to the FT phase, it is not clear if pre-training of the actor and critic can help the downstream task. To shed light on this, we seek to answer: "Which pre-trained components are useful for downstream tasks?". Here, we test different fine-tuning configurations, where we copy the weights of some of the PT components into the agent to fine-tune for the downstream task. We run the tests for the several unsupervised RL methods combined with Dreamer that we presented in Section 3.1 and show aggregated results in Figure 3 (detailed results per each method in Appendix E). Overall, fine-tuning the PT world model provides the most significant boost in performance, strengthening the hypothesis that world models are very effective with unsupervised RL. Fine-tuning the actor improves performance slightly in Walker and remarkably in Quadruped, but is harmful in the Jaco domain. An intuitive explanation is that in the Quadruped and Walker moving tasks, the exploratory behaviors help discovering reward faster. Instead, in the Jaco goal-reaching tasks, the agent needs to reach a certain target with sparse rewards. If the PT actor is initialized to move far from the target, the agent might struggle to find rewards in the small FT budget. Finally, using a PT critic is systematically worse. This can be explained by the discrepancy between intrinsic rewards and task rewards. 3.3 LEARNING AND PLANNING IN IMAGINATION Knowing a model of the environment, traditional model-based control approaches, e.g. model predictive control (MPC) (Williams et al., 2015; Chua et al., 2018; Richards, 2005), can be used to plan the agent’s action. Nonetheless, using actor-critic methods has several advantages, such as amortizing the cost of planning by caching previously computed (sub)optimal actions and computing long-term returns from a certain state, without having to predict outcomes that are far in the future. More recent hybrid strategies, such as LOOP (Sikchi et al., 2020) and TD-MPC (Hansen et al., 2022), allow combining trajectories sampled from the actor with trajectories sampled from a distribution over actions that is iteratively improved. The model and the critic are used to evaluate the trajectories, Algorithm 1 Dyna-MPC Require: Actor θ, Critic ψ, World Model ϕ 1: µ, σ: initial parameters for sampling actions 2: N,Nπ: num trajectories, num policy trajectories 3: zt, H: current model state, planning horizon 4: for each iteration j = 1..J do 5: Sample N trajectories of length H from N (µ, σ2I), starting from zt 6: Sample Nπ trajectories of length H using the actor πθ, starting from zt 7: Estimate future states, using the model, and returns, using reward and critic predictions 8: Update µ and σ using MPPI (Williams et al., 2015) 9: end for 10: return at ∼ N (µt, σ2t I) improve them, and eventually select the most promising actions, i.e. planning. In this section, we answer the question: Can we accelerate downstream task adaptation by leveraging planning? Dyna-MPC. As we pre-train a world model, we could exploit planning in latent space to adapt with limited additional environment interaction. One problem with the above strategies is that they are based upon learning off-policy actor and critic, which in our context would prevent us from exploiting the PT model to learn the actor and critic in imagination. 
In order to enable hybrid planning with the behavior learned in imagination (Hafner et al., 2019a), we develop a modification of these approaches, which we call Dyna-MPC, that combines the actor and critic learned in imagination with MPPI (Williams et al., 2015) for planning. As detailed in Algorithm 1, at each time step, we imagine a set of latent trajectories using the model, by sampling actions from a time-dependent multivariate gaussian and from the actor policy, trained with Dreamer in imagination. Returns for MPPI are estimated using reward predictions by the model and the critic. MPPI is used to update the parameters of the multivariate gaussian for J iterations. Details on how returns are estimated and the MPPI updates work are given in Appendix C. One significant difference with previous approaches is that the policy in Dyna-MPC is learned on-policy in imagination, thus no correction for learning off-policy is required (Sikchi et al., 2020). Given the insights from the previous section, we use the world models and actors pre-trained with all the different unsupervised strategies we considered (see Section 3.1)2 and test their FT performance with and without planning with Dyna-MPC. Aggregated scores are reported in Figure 4, and detailed results for each method are available in Appendix E. We observe that adopting Dyna-MPC is always beneficial, as it improves the average performance and reduces variance in all domains. 3.4 OUR METHOD: COMBINING THE FINDINGS TOGETHER In the large-scale study, we explored several design choices to establish the most adequate approach to tackle the URL benchmark, aiming to provide a general recipe for data-efficient adaptation thanks to unsupervised RL. Our approach combines the main findings we presented in the previous sections: 1. learning a model-based agent with data collected using unsupervised RL (Figure 2); 2. fine-tuning the PT world model (always) and the pre-trained actor (where beneficial), while learning the critic from scratch (Figure 3); 3. adopting a hybrid planner, as the proposed Dyna-MPC, to leverage both learning and planning in imagination (Figure 4). An overview of the method is illustrated in Figure 1 and the algorithm is presented in Appendix D. We believe the above recipe could be generally applied to unsupervised settings, also outside of URLB, with the precaution that one should carefully make two decisions: (a) whether fine-tuning the PT actor is meaningful for the downstream task or it’s better to re-learn it from scratch, (b) what is the best URL strategy to collect data. Both decisions strongly depend on the target domain/task and so it is difficult to assess their implications beforehand. However, adopting unsupervised strategies that specifically focus on interacting with interesting elements of the environment, e.g. objects, or that quickly explore large areas of the environment at the beginning of fine-tuning may help exploring and revisiting crucial states of the environment more easily (Parisi et al., 2021). For URLB, we already established (a) that the PT actor is effective in Walker and Quadruped tasks, but it is better re-learn the actor from scratch in Jaco, in Section 3.2. To decide which URL strategy to use (b) we present a detailed comparison of the performance of our approach using different exploration strategies. The results in Figure 5 show that the agent using LBS during pre-training performs overall best, as it has the highest interquartile mean (IQM) and mean scores, and the lowest optimality gap. 
Thus, in the evaluation section, we present Ours (LBS) as our approach. 4 EVALUATION AND ANALYSIS 4.1 UNSUPERVISED REINFORCEMENT LEARNING BENCHMARK In Section 3, we presented our approach, which combines the findings from our empirical large-scale study on URLB. In Figure 6, we compare the results from the original URLB paper with our approach. The performance of our method is superior in all domains. The second strongest method (DrQ with Disagreement) approaches an overall performance of 40% of the respective supervised baseline performance, while our method recovers more than 90% of its supervised counterpart. 4.2 REAL-WORLD REINFORCEMENT LEARNING BENCHMARK Algorithms developed in simulation struggle to transfer to real-world systems due to a series of implicit assumptions that are rarely satisfied in real environments, e.g. URLB assumes the dynamics between PT and FT stay the same. The RWRL benchmark (Dulac-Arnold et al., 2020) considers several challenges that are common in real-world systems and implements them on top of DMC tasks. We employ vision-based variants of the Walker Walk and Quadruped Walk tasks from the RWRL benchmark. These tasks introduce system delays, stochasticity, and perturbations of the robot’s model and sensors, which are applied with three degrees of intensity to the original environment, i.e. ‘easy’, ‘medium’, and ‘hard’ (details in Appendix F). We seek to answer whether in perturbed settings: • does unsupervised PT enable faster adaptation? • does unsupervised RL provide an advantage over random exploration? • does hybrid planning improve performance, as in URLB? In Figure 7, we present the results of our method, using LBS during PT, with and without planning with Dyna-MPC for FT, and compare to random exploration and training from scratch for 100k, 1M, and 2M frames. Crucially, the PT models are trained in the vanilla task-agnostic version of the environments from the DMC Suite, so that the results highlight the extent to which models trained in ideal conditions generalize to perturbed settings when fine-tuned in a low-data regime. 2We exceptionally do not use the pre-trained actor in the Jaco tasks, as this was shown to lead to better performance in Section 3.2 (Figure 3). Overall, we found that fine-tuning PT models offer an advantage over training from scratch for 100k frames, despite all the variations in the environment. Furthermore, on the Quadruped Easy and Medium settings, our method performs better than Dreamer@1M and not far from Dreamer@2M while using 10x and 20x less task-specific data, respectively. Our method also performs close to Dreamer@1M/2M in the Walker Easy task. Unsupervised RL for data collection (Ours) outperforms random actions in the ‘easy’ and ‘medium’ settings, showing that a better PT model yields higher FT performance, even when the dynamics of the downstream task is affected by misspecifications and noisy factors. Finally, in contrast with the findings on URLB, adopting the hybrid planner is not generally beneficial. We believe this is because the model’s predictions are less certain and precise in this setting and thus cannot inform the short-term planner accurately. 4.3 EXTENDED ANALYSIS To better analyze the learned components, we conducted a range of additional experiments. For conciseness, detailed descriptions of the experimental settings are deferred to Appendix G and we briefly summarize the takeaways in this section. Learning rewards online. 
We verify whether having to discover and learn the reward function during FT impacts performance. In Figure 8, we compare against agents that (violating the URLB settings) know the task in advance and can pre-train a reward predictor during the PT stage. We see that learning the reward predictor does not affect performance significantly for dense-reward tasks, such as the Walker and Quadruped tasks. However, in sparser reward tasks, i.e. the Jaco ones, knowing reward information in advance provides an advantage. Efficient strategies to find sparse rewards efficiently represent a challenge for future research. More details in Appendix G.1. Zero-shot adaptation. Knowing a reward predictor from PT, it could be possible to perform zero-shot control with MPC methods if the model and the reward function allow it. In Figure 9, we show that despite the zero-shot MPC (ZS) offers an advantage over Dreamer@100k, the FT phase is crucial to deliver high performance on the downstream tasks, as the agent uses this phase to collect missing information about the environment and the task. Further details in Appendix G.2. Latent dynamics discrepancy (LDD). We propose a novel metric, Latent Dynamics Discrepancy, which evaluates the distance between the latent predictions of the PT model and the same model after FT on a task. In Figure 10, we show the correlation between our metric and the performance ratio between using the PT model and the FT model for planning (see Appendix G.3 for a detailed explanation). We observed a strong negative Pearson correlation (−0.62, p-value: 0.03), highlighting that major updates in the model dynamics during FT played an important role in improving performance. Unsupervised rewards and performance. We analyze the correlation between the normalized performance of different agents and their intrinsic rewards for optimal trajectories obtained by an oracle agent in Table 1. In particular, the correlation for LBS, which overall performs best in URLB, has a statistical significance, as its p-value is < 0.05. We believe this correlation might be one of the causes of LBS outstanding performance. Further insights are provided in Appendix G.4. 5 RELATED WORK Model-based control. Dynamics models combined with powerful search methods have led to impressive results on a wide variety of tasks such as Atari (Schrittwieser et al., 2020) and continuous control (Hafner et al., 2019a; Janner et al., 2019; Sikchi et al., 2021; Lowrey et al., 2018). LOOP (Sikchi et al., 2020) and TD-MPC (Hansen et al., 2022) combine temporal difference learning and MPC. The model proposed with TD-MPC is task-oriented and thus requires a task to accelerate learning. In our work, we focus on unsupervised model learning, grounding on the DreamerV2 model (Hafner et al., 2021), whose supervision comes from predicting the environment’s observations. Methods that use no reconstruction could generalize better to visual differences (Mazzaglia et al., 2021a; Ma et al., 2020) but they lose in explainability, as they cannot decode imagined trajectories. Unsupervised RL. Prior to our work, the large-scale study of curiosity (Burda et al., 2018) provided an insightful analysis of the performance of knowledge-based methods in the reward-free setting. In our work, we leverage the URLB setting, to provide an analysis of a combination of model-based control techniques with unsupervised RL. This allowed us to formulate a strategy to adapt pre-trained models to visual control tasks in a data-efficient manner. Closely, Sekar et al. 
(2020) combines adapts the Disagreement (Pathak et al., 2019) to work with Dreamer (Hafner et al., 2019a). In our work, in addition to analyzing a wider choice of unsupervised RL strategies, we show how to better exploit the agent PT components for adaptation, and we propose a hybrid planner to improve data-efficiency. Transfer learning. In the field of transfer learning, fine-tuning is the most used approach. However, fine-tuning all the pre-trained agent components may not be the most effective strategy. In transfer learning for RL, they have studied this problem, mainly with the objective of transferring from one environment to another (Farebrother et al., 2018; Sasso et al., 2022; van Driessel & Francois-Lavet, 2021). Instead, we analyze which agent’s components should be transferred from the unsupervised PT stage to the supervised FT stage when the environment’s dynamics is assumed to stay similar or be the same. Another stream of work has studied successor representations, to enable a better transfer of the agent’s actor-critic (Hansen et al., 2020; Barreto et al., 2016). 6 CONCLUSION In order to accelerate the development and deployment of learning agents for real-world tasks, it is crucial that the employed algorithms can adapt in a data-efficient way for multiple tasks. Our study provides an empirical analysis of several design choices, which allowed us to obtain near-optimal performance in URLB and that showed robustness to perturbations in the environment, on the RWRL benchmark. We also analyzed several aspects of the learned models, to understand what could be improved further in the future to ease the adaptation process. Limitations. In the Jaco reaching tasks, we found that a bad initialization of the pre-trained actor can actually harm the agent’s performance. While competence-based approaches should address this limitation, by learning a variety of skill behaviors, their performance on the other domains has been subpar. Future work should aim to find a more general approach to pre-train behavior for fast adaptation or improve the exploration capabilities of competence-based approaches. Another issue we encountered, on the RWRL benchmark, is that if the environment introduces too intense perturbations during adaptation, relying on the predictions of the adopted world model becomes problematic, to the extent that exploiting a planner is not useful anymore. Developing more resilient models that can be trained in an unsupervised fashion and used for data-efficient planning, even in presence of complex perturbations, will be the focus of future studies. Reproducibility statement We reported in the main text (Algorithm 1) the pseudo-code for DynaMPC and in Appendix D the pseudo-code for our end-to-end approach. We also provide instructions on how we implemented our methods (Appendix B) and all the model and training hyperparameters to implement and reproduce the results (Table 4). We will release our code and scripts. A NORMALIZATION SCORES In Table 2, we report the mean scores for the URLB Expert, used to normalize the scores in the URLB paper, and for Dreamer@2M, which we use to normalize returns of our methods, where both supervised baselines have been trained individually on each of the 12 tasks from URLB for 2M frames. We additionally report mean and standard deviations for the best performing unsupervised baseline from URLB. which is Disagreement (Pathak et al., 2019), and our method (using LBS for data collection). 
We notice that our scores approach the Dreamer@2M’s scores in several tasks, eventually outperforming them in a few tasks (e.g. Walker Flip, Quadruped Jump). We believe this merit is due both to the exploration pre-training, which may have found more rewarding trajectories than greedy supervised RL optimization and of the improved Dyna-MPC planning strategy. B INTEGRATING UNSUPERVISED RL STRATEGIES We summarize here the unsupervised RL approaches tested and how we integrated them with the Dreamer algorithm for exploration. For all methods, rewards have been normalized during training using an exponential moving average with momentum 0.95, with the exceptions of RND, which follows its original reward normalization (Burda et al., 2019b), and APS, whose rewards are not normalized because they are used to regress the skill that is closer to the downstream task during FT. ICM. The Intrinsic Curiosity Module (ICM; Pathak et al. (2017)) defines intrinsic rewards as the error between states projected in a feature space and a feature dynamics model’s predictions. We use the Dreamer agent encoder et = fϕ(st) to obtain features and train a forward dynamics model g(et|et−1, at−1) to compute rewards as: rt ICM ∝ ∥g(et|et−1, at−1)− et∥2. As the rewards for ICM require environment states (going through the encoder to compute prediction error), we train a reward predictor to allow estimating rewards in imagination. Plan2Explore. The Plan2Explore algorithm (Sekar et al., 2020) is an adaptation of the Disagreement algorithm (Pathak et al., 2019) for latent dynamics models. An ensemble of forward dynamics models is trained to predict the features embedding et = fϕ(st), given the previous latent state and actions, i.e. g(et|zt−1, at−1, wk), where wk are the parameters of the k-th predictor. Intrinsic rewards are defined as the variance of the ensemble predictions: rt P2E ∝ Var({g(et|zt−1, at−1, wk)|k ∈ [1, ...,K]}). Plan2Explore requires only latent states and actions, thus it can be computed directly in imagination. We used an ensemble of 5 models. RND. Random Network Distillation (RND; Burda et al. (2019b)) learns to predict the output of a randomly initialized network n(st) that projects the states into a more compact random feature space. As the random network is not updated during training, the prediction error should diminish for already visited states. The intrinsic reward here is defined as: rt RND ∝ ∥g(st)− n(st)∥2 As the rewards for RND requires environment states (to encode with the random network), we train a reward predictor to allow estimating rewards in imagination. LBS. In Latent Bayesian Surprise (LBS; Mazzaglia et al. (2021b)), they use the KL divergence between the posterior and the prior of a latent dynamics model as a proxy for the information gained with respect to the latent state variable, by observing new states. Rewards are computed as: rt LBS ∝ DKL[q(zt|zt−1, at−1, et)∥p(zt|zt−1, at−1)] As the rewards for LBS requires environment states (to compute the posterior distribution), we train a reward predictor to allow estimating rewards in imagination. APT. Active Pre-training (APT; Liu & Abbeel (2021b)) uses a particle-based estimator based on the K nearest-neighbors algorithm (Singh et al., 2003) to estimate entropy for a given state. We implement APT on top of the deterministic component of the latent states z̄t, providing rewards as: rt APT ∝ k∑ i log ∥z̄t − z̄it∥2, where k are the nearest-neighbor states in latent space. 
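Looking back at the LBS reward described above, its computation reduces to a single KL term between the posterior and the prior of the latent dynamics. A minimal sketch, assuming both are available as torch distributions, could be:

from torch.distributions import kl_divergence

def lbs_reward(posterior, prior):
    # Latent Bayesian Surprise: KL between the posterior q(z_t | z_{t-1}, a_{t-1}, e_t)
    # and the prior p(z_t | z_{t-1}, a_{t-1}) of the latent dynamics model.
    return kl_divergence(posterior, prior)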
As APT requires only latent states, it can be computed directly in imagination. We used k = 12 nearest neighbors. DIAYN. Diversity is All you need (DIAYN; Eysenbach et al. (2019)) maximizes the mutual information between the states and latent skills w. We implement DIAYN on top of the latent space of Dreamer, writing the mutual information as I(wt, zt) = H(wt)−H(wt|zt). The entropy H(wt) is kept maximal by sampling wt ∼ Unif(wt) from a discrete uniform prior distribution, while H(wt|zt) is estimated learning a discriminator q(wt|zt). We compute intrinsic rewards as: rt DIAYN ∝ log q(wt|zt) Additionally, DIAYN maximizes the entropy of the actor, so we add an entropy maximization term to Dreamer’s objective (Haarnoja et al., 2018). As DIAYN requires model states and skills sampled from a uniform distribution to compute rewards, we can directly compute them in imagination. For FT, the skill adapted is the one with the highest expected rewards, considering the states and rewards obtained in the initial episodes. APS. Active Pre-training with Successor features (APS; Liu & Abbeel (2021a)) maximizes the mutual information between the states and latent skills w. We implement APS on top of the latent space of Dreamer, writing the mutual information as I(wt, zt) = H(zt)−H(zt|wt). The entropy term H(zt) is estimated using a particle-based estimator on top of the deterministic component of the latent states z̄t, as for APT, while the term H(zt|wt) is estimated learning a discriminator q(zt|wt). The intrinsic rewards for APS can be written as: rt APS ∝ rtAPT + log q(wt|zt) As APS requires model states and uniformly sampled skills to compute rewards, we can directly compute them in imagination. For FT, the skill to adapt is selected using linear regression over the states and rewards obtained in the initial episodes (Liu & Abbeel, 2021a). C DYNA-MPC To further improve data efficiency, we chose to use an hybrid planner that combines reinforcement learning and MPC (Hansen et al., 2022; Sikchi et al., 2020; Lowrey et al., 2018). Previous works leveraged model-free off-policy algorithms (Hansen et al., 2022; Sikchi et al., 2020) to learn the actor and critic in a more computationally efficient manner. The policy used to act on the environment combines action samples from the actor network with MPC, while the critic and the actor are learned "offline" from previously collected data. This has several benefits but also leads to an issue referred to as “actor divergence" (Sikchi et al., 2020), which consists of the policy used for data collection being different from the policy that is used to learn the critic. In our study, we found that using the PT world model to learn the actor and the critic is crucial to improve data-efficiency during FT (see Figure 3). Thus, we discard the option of learning the actor and critic with off-policy deep RL. Instead, we design a new hybrid planner, which we call Dyna-MPC, that learns actor and critic functions in the model imagination (Sutton, 1991), using the Dreamer algorithm (Hafner et al., 2019a), and then combines their predictions with MPPI (Williams et al., 2015) for acting on the environment. By doing so we mitigate the "actor divergence" issue as actor and critic are learned on-policy on the trajectories generated with the model. 
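Before detailing the return estimates and the MPPI update below, here is a minimal sketch of the λ-return computation used for the on-policy actor and critic learned in imagination (Eq. (1) in the following paragraph). Tensor shapes and names are assumptions for illustration, not the released code:

import torch

def imagined_lambda_returns(rewards, values, discounts, lam=0.95):
    # rewards[t], discounts[t]: model-predicted reward and discount at imagined step t.
    # values[t]: critic estimate v(z_{t+1}); values[-1] bootstraps the horizon.
    last = values[-1]
    returns = []
    for t in reversed(range(rewards.shape[0])):
        last = rewards[t] + discounts[t] * ((1 - lam) * values[t] + lam * last)
        returns.append(last)
    return torch.stack(returns[::-1])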
The critic is learned in the model's imagination, computing the expected value of the actor's actions using GAE-λ estimates of the returns (Schulman et al., 2016; Hafner et al., 2019a):

V_t^λ = r_t + γ_t · { (1 − λ) v_ψ(z_{t+1}) + λ V_{t+1}^λ   if t < H;   v_ψ(z_H)   if t = H },    (1)

where r_t is the reward for state z_t, yielded by the reward predictor of the world model, and H is the imagination horizon. When computing returns for MPPI we use the same return estimates. At each time step, we use MPPI to select the best action. MPPI iteratively fits the parameters of a time-dependent multivariate Gaussian distribution with diagonal covariance, updating the mean and standard deviation parameters using an importance-weighted average of the top-k trajectories with the highest estimated returns. At every step, N trajectories Γ_i = {a_{0,i}, a_{1,i}, ..., a_{H,i}} of length H are obtained by sampling actions from the distributions a_t ∼ N(µ_t, σ_t^2 I), and N_π trajectories are sampled from the actor network a_t ∼ π_θ(a_t | z_t); their outcomes are predicted using the model. At each MPPI iteration, the distribution parameters are updated as follows:

µ = (Σ_{i=1}^{k} Ω_i Γ_i^⋆) / (Σ_{i=1}^{N} Ω_i),    σ = max( sqrt( (Σ_{i=1}^{N} Ω_i (Γ_i^⋆ − µ)^2) / (Σ_{i=1}^{N} Ω_i) ), ϵ ),    (2)

where Ω_i = exp(τ V_i^λ), τ is a temperature parameter, ⋆ indicates that the trajectory is in the top-k, and ϵ is a clipping factor to avoid too small standard deviations (Hansen et al., 2022). To reduce the number of iterations required for convergence, we reuse the 1-step shifted mean obtained at the previous timestep (Argenson & Dulac-Arnold, 2020).

D ALGORITHM

Algorithm 2 Unsupervised Model-based Pre-Training for Data-efficient Control from Pixels
Require: Actor θ, Critic ψ, World Model ϕ
1: Intrinsic reward r_int, extrinsic reward r_ext
2: Environment, M downstream tasks T_k, k ∈ [1, . . . , M]
3: Pre-train frames N_PT, fine-tune frames N_FT, environment frames/update τ
4: Initial model state z_0, hybrid planner Dyna-MPC, replay buffers D_PT, D_FT
5:
6: // Pre-training
7: for t = 0, . . . , N_PT do
8:   Draw action from the actor, a_t ∼ π_θ(a_t | z_t)
9:   Apply action to the environment, s_{t+1} ∼ P(·| s_t, a_t)
10:  Add transition to replay buffer, D_PT ← D_PT ∪ (s_t, a_t, s_{t+1})
11:  Infer model state, z_{t+1} ∼ q(z_{t+1} | z_t, a_t, f_ϕ(s_{t+1}))
12:  if t mod τ = 0 then
13:    Update world model parameters ϕ on the data from the replay buffer D_PT
14:    Update actor-critic parameters {θ, ψ} in imagination, maximizing r_int
15:  end if
16: end for
17: Output pre-trained parameters {ψ_PT, θ_PT, ϕ_PT}
18:
19: // Fine-tuning
20: for T_k ∈ [T_1, . . . , T_M] do
21:   Initialize fine-tuning world model with ϕ_PT
22:   (Optional) Initialize fine-tuning actor with θ_PT
23:   for t = 0, . . . , N_FT do
24:     Draw action from the actor, a_t ∼ π_θ(a_t | z_t)
25:     Use the planner for selecting the best action, a_t ∼ Dyna-MPC(z_t)
26:     Apply action to the environment, s_{t+1}, r_t^ext ∼ P(·| s_t, a_t)
27:     Add transition to replay buffer, D_FT ← D_FT ∪ (s_t, a_t, r_t^ext, s_{t+1})
28:     Infer model state, z_{t+1} ∼ q(z_{t+1} | z_t, a_t, f_ϕ(s_{t+1}))
29:     if t mod τ = 0 then
30:       Update world model parameters ϕ on the data from the replay buffer D_FT
31:       Update actor-critic parameters {θ, ψ} in imagination, maximizing r_ext
32:     end if
33:   end for
34:   Evaluate performance on T_k
35: end for

E ADDITIONAL RESULTS
We present complete results, for each unsupervised RL method, for the large-scale study experiments presented in Section 3.
Can a pre-training stage longer than 2M frames be beneficial? In Figure 14, we report FT results with our full method, every 1M frames up to 5M PT frames.
The aggregated results show that, with our method, longer PT can further increase performance, especially up to 4M steps. The performance in all domains keeps increasing or remains steady until 5M steps, with two exceptional cases, Walker for Plan2Explore and Jaco for APS, where performance drops between 4M and 5M steps. For these experiments, we kept the size of the model and all the hyperparameters unchanged with respect to the 2M PT frames experiments, but we increased the replay buffer maximum size to 5M frames. By increasing model capacity and adopting additional precautions, such as annealing the learning rate, it is possible that the agent could benefit even more from longer pre-training; we aim to analyse this in more detail in future work.

F RWRL SETTINGS
We take the Quadruped and Walker tasks from the RWRL benchmark and replace the low-dimensional sensor inputs with RGB camera inputs. While this removes some of the perturbations planned in the benchmark (Dulac-Arnold et al., 2020), such as noise in the sensors, it introduces the difficulty of different dynamics in pixel space (due to the other perturbations), compared to the dynamics observed during pre-training in the vanilla simulation environment.

G EXTENDED ANALYSIS
We note that, to run the experiments faster, we did not use Dyna-MPC for the extended analysis. Furthermore, the Jaco tasks used slightly differ from the original ones in URLB, only in that the target to reach cannot move. This allows consistency of the reward function between PT and FT, so that a reward predictor can be trained on ‘reward-labelled’ PT data. However, because of this change, the performance in Jaco may differ from the other main results (particularly in Figure 8 and Figure 9).

G.1 LEARNING REWARDS ONLINE
In Figure 8 of the main text, we measure the gap in performance between pre-trained agents that have no knowledge of the reward function at the beginning of fine-tuning and agents whose reward predictor is initialized from a reward predictor learned on top of the unsupervised pre-training data (violating the URLB settings). Crucially, the agent during unsupervised PT can learn the reward predictor without affecting either the model learning or the exploration process. To not affect the model, gradients are stopped between the reward predictor and the rest of the world model. To not affect exploration, the rewards used to train the agent's actor and critic remain the intrinsic rewards.

G.2 ZERO-SHOT ADAPTATION
Using agents that have access to a PT reward predictor, we explore the idea of zero-shot adaptation using MPC, that is, trying to solve the URLB tasks using only planning with the pre-trained world model and reward predictor. In order to obtain good performance, this assumes that the model correctly learned the dynamics of the environment and explored rewarding transitions that are relevant to the downstream task during pre-training. In Figure 9 of the main text, we compare the results of performing MPC in a zero-shot setting (ZS) with the performance of an MPC agent that is allowed 100k frames for fine-tuning (FT). As for the MPC method, we employ MPPI (Williams et al., 2015). Because these experiments are particularly expensive to run, we run them only on the agents trained with the Plan2Explore URL approach. We observe that the performance of zero-shot MPC is generally weak.
While it overall performs better than the non-pre-trained model, simply applying MPC with the pre-trained world model and the reward predictor trained on the pre-training stage data is not sufficient to guarantee satisfactory performance. The fact that exploiting the fine-tuning stage using the same MPC approach generally boosts performance demonstrates that the model benefits greatly from the FT stage. Still, the performance of MPC generally lags behind the actor-critic performance, suggesting that, especially in a higher-dimensional action space such as the Quadruped one, amortizing the cost of planning with an actor-critic seems crucial to achieve higher performance.

G.3 LATENT DYNAMICS DISCREPANCY
Model misspecification is a useful measure to assess the uncertainty or inaccuracy of the model dynamics. It is computed as the difference between the dynamics predictions and the real environment dynamics. The metric helps build robust RL strategies that take the dynamics uncertainty into account while searching for the optimal behavior (Talvitie, 2018). However, with pixel-based inputs the dynamics of the environment are observed through high-dimensional images. This, in turn, could hurt the metric evaluation, since distances in pixel space can be misleading. In our approach, we use a model-based RL agent that learns the dynamics model in a compact latent space Z. Our novel metric, Latent Dynamics Discrepancy (LDD), accordingly quantifies the “misspecification” of the learned latent dynamics. The metric quantifies the distance between the predictions of the pre-trained model and the same model after fine-tuning on a downstream task. However, as the decoder of the world model gets updated during fine-tuning, the latent space mapping between model states z and environment states s might drift. For this reason, we freeze the agent's decoder weights, so that the model can only improve the posterior and the dynamics. This ensures that the mapping Z → S remains unchanged and allows us to compare the dynamics model after fine-tuning with the one before fine-tuning. In order to measure the distance between the distributions output by the dynamics network, we chose the symmetrical Jensen-Shannon divergence:

LDD = E_{(z_t, a_t)} [ D_JS[ p_FT(z_{t+1} | z_t, a_t) ‖ p_PT(z_{t+1} | z_t, a_t) ] ],    (3)

where the expectation is taken over model states z_t sampled from the fine-tuned posterior q_FT(z_t) and actions a_t sampled from an oracle actor π*(a_t | z_t), so that we evaluate the metric on optimal trajectories, whose environment state distribution corresponds to the stationary distribution induced by the actor, s_t ∼ d^{π*}(s_t). We used 30 trajectories per task in our evaluation. We observe in our experiments that there exists a correlation between the metric and the performance ratio between a zero-shot model and a fine-tuned model (see Figure 10 in the main paper). The key observation is that major updates in the model dynamics during the fine-tuning phase played an important role in improving the agent's performance, compared to the pre-trained model and zero-shot performance. Future research may attempt to reduce such dependency by either improving the model learning process, so that the pre-trained dynamics could have greater accuracy, or improving the data collection process, e.g. by proposing URL methods that directly help reduce such uncertainty.
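As a concrete reference for how the LDD metric in Eq. 3 can be estimated, the sketch below computes the Jensen-Shannon divergence between the pre-trained and fine-tuned dynamics over a batch of state-action pairs. It assumes the two dynamics networks expose their next-state predictions as discrete probability vectors (as with categorical latents); this interface is an assumption for illustration, not the exact code used.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between rows of two probability matrices of shape (N, C)."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def js(p, q):
    """Symmetric Jensen-Shannon divergence between rows of p and q."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def latent_dynamics_discrepancy(p_ft, p_pt):
    """LDD estimate over N (z_t, a_t) pairs from oracle trajectories.

    p_ft, p_pt: (N, C) next-latent-state distributions predicted by the
    fine-tuned and the pre-trained dynamics on the same inputs.
    """
    return js(p_ft, p_pt).mean()
```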
G.4 UNSUPERVISED REWARDS AND PERFORMANCE
We further analyzed the correlation between the normalized performance of the different exploration agents and their intrinsic rewards for optimal trajectories obtained by an oracle agent. A strong negative correlation between the two factors should indicate that the agent is more interested in seeing the optimal trajectories when its performance is low on the task. We observe that there is a negative correlation between the performance of Plan2Explore (P2E), ICM, and LBS and their intrinsic rewards, while we found ∼0 correlation for RND (see Table 1 in the main text). Among the methods tested, LBS demonstrated the correlation most significantly, with a p-value < 0.05. This is likely one of the key factors for the high performance of the agent using LBS on the benchmark. One possible explanation is that LBS searches for transitions of the environment that are difficult to predict for the dynamics, so the model likely learns those transitions more accurately, facilitating planning during the fine-tuning stage. Another potential explanation is that, given the high correlation between intrinsic and extrinsic rewards, the actor initialized by LBS performs better at the beginning of FT, speeding up adaptation.

H HYPERPARAMETERS
Most of the hyperparameters we used for world-model training are the same as in the original DreamerV2 work (Hafner et al., 2021). Specific details are outlined below. For the pure MPC-based experiments, we increased the number of MPPI samples from 512 to 1000, the number of top-k trajectories from 64 to 100, and the horizon from 5 to 15, to compensate for the absence of the actor network's samples and the critic's predictions in the return estimates.
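For reference, the following is a minimal sketch of one MPPI distribution update (Eq. 2 in Appendix C) using the top-k and sample-count hyperparameters discussed above; the function signature, the temperature value, and the way trajectory returns are supplied are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def mppi_update(actions, returns, top_k=100, temperature=0.5, eps=1e-3):
    """One MPPI iteration over sampled action trajectories.

    actions: (N, H, A) action sequences (model samples plus actor samples)
    returns: (N,) estimated returns, e.g. the GAE-lambda estimates
    Returns the updated mean and std of the time-dependent Gaussian, shape (H, A).
    """
    top = np.argsort(returns)[-top_k:]                          # indices of the top-k trajectories
    # Importance weights Omega_i; subtracting the max keeps exp() stable.
    omega = np.exp(temperature * (returns[top] - returns[top].max()))
    omega = omega / omega.sum()
    best = actions[top]                                          # (top_k, H, A)
    mu = (omega[:, None, None] * best).sum(axis=0)
    var = (omega[:, None, None] * (best - mu) ** 2).sum(axis=0)
    sigma = np.maximum(np.sqrt(var), eps)                        # clip to avoid collapsing std
    return mu, sigma
```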
1. What is the focus of the paper regarding unsupervised pretraining of reinforcement learning agents?
2. What are the strengths and weaknesses of the proposed approach compared to existing techniques?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the experimental results and their interpretation?
5. Can the authors provide additional insights into the impact of different design choices and their contribution to the overall performance?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper considers the task of unsupervised pretraining of reinforcement learning (RL) agents. It analyses and compares different design choices for pretraining and fine-tuning the pretrained components; it shows the improvement of using the proposed model-based reinforcement learning over the existing model-free technique, DrQ.

Strengths And Weaknesses
Strengths:
- The paper is really well written: the narrative is shaped around choosing different components for pre-training and fine-tuning, with the structure of the overall approach neatly summarised in Section 3.4.
- It contains compelling evidence of the benefits of using the proposed model-based reinforcement learning method over the existing state-of-the-art model-free technique.
- The analysis includes the necessary justification of the architecture: the choice of the best possible pretraining model.

Weaknesses:
- The model is engineered out of existing components (apart from the incremental, in a positive sense of this word, contribution of Dyna-MPC); this is not, however, a problem at all in my opinion, as we need large-scale studies showing the impact of architectural choices. This is a good contribution for the community, and this very knowledge of how to improve architectural choices is a novel aspect of the paper.

Clarity, Quality, Novelty And Reproducibility
Clarity: the paper is really well-written, with the overall structure summarised concisely in Section 3.4.
Novelty of the work is in the large-scale evaluation; although the work is engineered out of existing components, the reviewer thinks that it is important that the work shows the impact of architectural choices and therefore serves as a good contribution.

Comments:
- In Figure 2, it appears that while the proposed procedure wins over the results from Laskin et al. (2021) by a big margin, the results still keep improving even after 2M frames of pretraining; do the authors know what would happen beyond 2M frames, and whether there is a point where additional pretraining iterations give no improvement in fine-tuning? Does it stabilise, or does it lead to some form of overfitting degrading performance after a certain point?
- Section 3.2: what is the impact of fine-tuning just the actor, without fine-tuning the model?
- In Figure 6, is it possible to add the baseline of the proposed method ablation where there is no pretraining phase, to match up with DrQ @100K? That would help complete the picture of the contribution of pretraining to the overall process.
- Did the authors think of using Dyna-MPC in conjunction with the DrQ model? Is it possible (I guess not, because of the world model requirement, which would make such a modification model-based), and could it bring a fraction of the proposed benefits?
- In Algorithm 1, it looks like the world model ϕ is listed as an input but never explicitly referenced in the algorithm body; could the authors clarify whether this is correct and whether the explicit reference is needed?
ICLR
Title
Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues

Abstract
Compared to traditional visual question answering, video-grounded dialogues require additional reasoning over dialogue context to answer questions in a multi-turn setting. Previous approaches to video-grounded dialogues mostly use dialogue context as a simple text input without modelling the inherent information flows at the turn level. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). PDC model discovers information flows among dialogue turns through a semantic graph constructed based on lexical components in each question and answer. PDC model then learns to predict reasoning paths over this semantic graph. Our path prediction model predicts a path from the current turn through past dialogue turns that contain additional visual cues to answer the current question. Our reasoning model sequentially processes both visual and textual information through this reasoning path and the propagated features are used to generate the answer. Our experimental results demonstrate the effectiveness of our method and provide additional insights on how models use semantic dependencies in a dialogue context to retrieve visual cues.

1 INTRODUCTION
Traditional visual question answering (Antol et al., 2015; Jang et al., 2017) involves answering questions about a given image. Extending from this line of research, recently Das et al. (2017); Alamri et al. (2019) add another level of complexity by positioning each question and answer pair in a multi-turn or conversational setting (see Figure 1 for an example). This line of research has promising applications to improve virtual intelligent assistants in multi-modal scenarios (e.g. assistants for people with visual impairment). Most state-of-the-art approaches in this line of research (Kang et al., 2019; Schwartz et al., 2019b; Le et al., 2019) tackle the additional complexity in the multi-turn setting by learning to process dialogue context sequentially turn by turn. Despite the success of these approaches, they often fail to exploit the dependencies between dialogue turns of long distance, e.g. the 2nd and 5th turns in Figure 1. In long dialogues, this shortcoming becomes more obvious and necessitates an approach for learning long-distance dependencies between dialogue turns. To reason over dialogue context with long-distance dependencies, recent research in dialogues discovers graph-based structures at the turn level to predict the speaker's emotion (Ghosal et al., 2019) or generate sequential questions semi-autoregressively (Chai & Wan, 2020). Recently Zheng et al. (2019) incorporate graph neural models to connect the textual cues between all pairs of dialogue turns. These methods, however, involve a fixed graphical structure of dialogue turns, in which only a small number of nodes contain lexical overlap with the question of the current turn, e.g. the 1st, 3rd, and 5th turns in Figure 1. These methods also fail to factor in the temporality of dialogue turns, as the graph structures do not guarantee the sequential ordering among turns. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). PDC model learns a reasoning path that traverses through dialogue turns to propagate contextual cues that are densely related to the semantics of the current questions. Our approach balances between a sequential and graphical process to exploit dialogue information.
Our work is related to the long-studied research domain of discourse structures, e.g. (Barzilay & Lapata, 2008; Feng & Hirst, 2011; Tan et al., 2016; Habernal & Gurevych, 2017). A form of discourse structure is the argument structure, including premises and claims and their relations. Argument structures have been studied to assess different characteristics in text, such as coherence, persuasiveness, and susceptibility to attack. However, most efforts are designed for discourse study in monologues and much less attention is directed towards conversational data. In this work, we investigate a form of discourse structure through semantic graphs built upon the overlap of component representations among dialogue turns. We further enhance the models with a reasoning path learning model to learn the best information path for the next utterance generation. To learn a reasoning path, we incorporate our method with bridge entities, a concept often seen in reading comprehension research, and earlier used in entity-based discourse analysis (Barzilay & Lapata, 2008). In reading comprehension problems, bridge entities denote entities that are common between two knowledge bases, e.g. Wikipedia paragraphs in HotpotQA (Yang et al., 2018b). In discourse analysis, entities and their locations in text are used to learn linguistic patterns that indicate certain qualities of a document. In our method, we first reconstruct each dialogue turn (including question and answer) into a set of component sub-nodes (e.g. entities, action phrases) using common syntactical dependency parsers. Each resulting dialogue turn contains sub-nodes that can be used as bridge entities. Our reasoning path learning approach contains 2 phases: (1) first, at each dialogue turn, a graph network is constructed at the turn level. Any two turns are connected if they have an overlapping sub-node or if two of their sub-nodes are semantically similar. (2) secondly, a path generator is trained to predict a path from the current dialogue turn to past dialogue turns that provide additional and relevant cues to answer the current question. The predicted path is used as a skeleton layout to propagate visual features through each step of the path. Specifically, in PDC, we adopt non-parameterized approaches (e.g. cosine similarity) to construct the edges in graph networks and each sub-node is represented by pre-trained word embedding vectors. Our path generator is a transformer decoder that auto-regressively generates the next turn index conditioned on the previously generated turn sequence. Our reasoning model is a combination of a vanilla graph convolutional network (Kipf & Welling, 2017) and a transformer encoder (Vaswani et al., 2017). In each traversing step, we retrieve visual features conditioned on the corresponding dialogue turn and propagate the features to the next step. Finally, the propagated multimodal features are used as input to a transformer decoder to predict the answer. Our experimental results show that our method can improve the results on the Audio-Visual Scene-Aware Dialogues (AVSD) generation settings (Alamri et al., 2019), outperforming previous state-of-the-art methods. We evaluate our approach through comprehensive ablation analysis and qualitative study. PDC model also provides additional insights on how the inherent contextual cues in dialogue context are learned in neural networks in the form of a reasoning path.

2 RELATED WORK
Discourses in monologues. Related to our work is the research of discourse structures.
A long-studied line of research in this domain focuses on argument mining to identify the structure of arguments, claims and premises, and relations between them (Feng & Hirst, 2011; Stab & Gurevych, 2014; Peldszus & Stede, 2015; Persing & Ng, 2016; Habernal & Gurevych, 2017). More recently, Ghosh et al. (2016); Duthie & Budzynska (2018); Jiang et al. (2019) propose to learn argument structures in student essays and official debates. In earlier approaches, Barzilay & Lapata (2008); Lin et al. (2011); Feng et al. (2014) study discourses to derive coherence assessment methods through entity-based representations of text. These approaches are motivated by linguistic theories surrounding entity patterns in discourses, i.e. how they are introduced and discussed (Grosz et al., 1995). Guinaudeau & Strube (2013); Putra & Tokunaga (2017) extend prior work with graphical structures in which sentence similarity is calculated based on semantic vectors representing those sentences. These lines of research show that studying discourse structures is useful in many tasks, such as document ranking and discrimination. However, most of these approaches are designed for monologues rather than dialogues.

Discourses in dialogues. More related to our problem setting is discourse research on text in a multi-turn setting. Murakami & Raymond (2010); Boltužić & Šnajder (2014); Swanson et al. (2015); Tan et al. (2016); Niculae et al. (2017); Morio & Fujita (2018); Chakrabarty et al. (2019) introduce new corpora and different methods to mine arguments in online discussion forums. Their models are trained to extract claims and premises in each user post and identify the relations between argument components in each pair of user posts. More recently, Li et al. (2020a); Jo et al. (2020) extend argument mining in online threads to identify attackability and persuasiveness in online posts. In this work, we address the problem of video-grounded dialogue, in which dialogue turns are often semantically connected by a common grounding information source, a video. In this task, a discourse-based approach enables dialogue models to learn to anticipate the upcoming textual information in future dialogue turns. However, directly applying prior work on discourse or argument structures to video-grounded dialogues is not straightforward due to the inherent difference between online discussion posts and video-grounded dialogues. In video-grounded dialogues, the language is often closer to spoken language and there are fewer clear argument structures to be learned. Moreover, the presence of video necessitates the interaction between multiple modalities, text and vision. Incorporating traditional discourse structures to model cross-modality interaction is not straightforward. In this work, we propose to model dialogue context by using compositional graphical structures and constructing information traversal paths through dialogue turns.

Graph-based dialogue models. Related to our work is research that investigates different types of graph structures in dialogue. Hu et al. (2019); Shi & Huang (2019); Zhu et al. (2020) address the “reply_to” relationship among multi-party dialogues through graph networks that incorporate conversational flows in comment threads on social networks, e.g. Reddit and Ubuntu IRC, and online games. Zheng et al. (2019) propose a fully connected graph structure at the turn level for visual dialogues. Concurrently, Ghosal et al.
(2019) also propose a fully connected graph structure with heterogeneous edges to detect the emotion of participating speakers. All of these methods discover graph structures connecting pairs of dialogue turns with little lexical overlap, resulting in sub-optimal feature propagation. This drawback becomes more significant in question answering problems in multi-turn settings. Our approach constructs graph networks based on compositional similarities.

Reasoning path learning. Our method is also motivated by the recent research on machine reading comprehension, e.g. WikiHop (Welbl et al., 2018) and HotpotQA (Yang et al., 2018a). De Cao et al. (2019); Qiu et al. (2019) construct graph networks of supporting documents with entity nodes that are connected based on different kinds of relationships. Tu et al. (2019); Tang et al. (2020) enhance these methods with additional edges connecting output candidates and documents. Extended from these methods are path-based approaches that learn to predict a reasoning path through supporting documents. Kundu et al. (2019); Asai et al. (2020) score and rank path candidates that connect entities in the question to the target answer. A common strategy among these methods is the use of bridge entities. However, unlike reading comprehension, dialogues are normally not entity-centric and it is not trivial to directly adopt bridge entities into dialogue context.

Cross-modality feature learning. Our work is related to research that integrates visual and linguistic information representation. A line of research in this domain is the problem of visual QA, e.g. (Minh Le et al., 2020; Gao et al., 2019). Closer to our method are methods that adopt compositionality in textual features. Specifically, Socher et al. (2014) introduce image and language representation learning by detecting the component lexical parts in sentences and combining them with image features. The main difference between these approaches and our work is the study of cross-modalities in a multi-turn setting. Our approach directly tackles the embedded sequential order in dialogue utterances and examines how cross-modality features are passed from turn to turn.

3 METHOD
To describe our PDC model, we introduce a new graph-based method (Section 3.2) that constructs a graph structure to connect turn-level representations in dialogue context based on their compositional semantics. The compositional semantics consists of sub-nodes detected through syntactical dependency parsing methods. We enhance our approach with a path-based propagation method (Section 3.3) to narrow down the contextual information that facilitates question answering of the current turn. Our approach integrates a strong strategy to model dialogue flows in the form of graphical and path-based information such that contextual linguistic information is exploited to propagate relevant visual features (Section 3.4). Figure 2 demonstrates an overview of our method.

3.1 PROBLEM DEFINITION
The inputs to a question answering problem in a multi-turn setting consist of a dialogue D and the visual input of a video I. Each dialogue contains a sequence of dialogue turns, each of which is a pair of question Q and answer A. At each dialogue turn t, we denote the dialogue context C_t as all previous dialogue turns C_t = {(Q_i, A_i)}_{i=1}^{t−1}. Since it is positioned in a dialogue, the question of turn t, Q_t, might be dependent on a subset of the dialogue context C_t. The output is the answer of the current turn Â_t. Each textual component, i.e.
Q and A, is represented as a sequence of token or word indices {w_m}_{m=1}^{L} ∈ V, where L is the sequence length and V is the vocabulary set. The objective of the task is the generation objective that outputs the answer of the current turn:

Â_t = argmax_{A_t} P(A_t | I, C_t, Q_t; θ) = argmax_{A_t} ∏_{m=1}^{L_A} P_m(w_m | A_{t,1:m−1}, I, C_t, Q_t; θ)    (1)

3.2 COMPOSITIONAL SEMANTIC GRAPH OF DIALOGUE CONTEXT
The semantic relations between dialogue turns are decomposed into semantic relations between sub-nodes that constitute each turn. These compositional relations serve as strong clues to determine how a dialogue turn is related to another. We first employ a co-reference resolution system, e.g. (Clark & Manning, 2016), to replace pronouns with the original entities. We then explore using the Stanford parser system (v3.9.2, retrieved at https://nlp.stanford.edu/software/lex-parser.shtml) to discover sub-nodes. The parser decomposes each sentence into grammatical components, where a word and its modifier are connected in a tree structure. For each dialogue turn, we concatenate the question and answer of that turn as input to the parser. The output dependency tree is pruned to remove unimportant constituents and merge adjacent nodes to form a semantic unit. A graph structure G is then constructed. Any two turns are connected if a pair of their corresponding sub-nodes is semantically similar. To calculate the similarity score, we obtain their pre-trained word2vec embeddings (https://code.google.com/archive/p/word2vec/) and compute the cosine similarity score. Algorithm 1 provides the details of the procedure to automatically construct a semantic graph. Note that our approach can also be applied with other co-reference resolution systems, parsers, or pre-trained embeddings. Unlike graph structures in machine reading comprehension, such as the Wikipedia graph, the semantic graph G is not fixed throughout the sample population but is constructed for each dialogue and at each turn.

Algorithm 1: Compositional semantic graph of dialogue context
Data: Dialogue context C_t, question of the current turn Q_t
Result: Semantic graph G = (V, E)
1 begin
2   T ← ∅; G = {V, E}; E ← ∅; V ← ∅; S ← ∅;
3   H ← Coreference_Resolution([C_t; Q_t]);
4   for each dialogue turn h ∈ H do
5     T_h ← Merge_Nodes(Prune_Tree(Dependency_Parse(h))); T ← T ∪ {T_h};
6     V ← V ∪ {h}; E ← E ∪ {〈Turn_Position(h), Turn_Position(h)〉}
7   for each dependency tree T = (V_T, E_T) ∈ T do S ← S ∪ {V_T}
8   for each sub-node s_i ∈ S do
9     for each sub-node s_j ∈ S do
10      if not In_Same_Turn(s_i, s_j) and Is_Similar(s_i, s_j) then
11        E ← E ∪ {〈Get_Dial_Turn(s_i), Get_Dial_Turn(s_j)〉}
12        E ← E ∪ {〈Get_Dial_Turn(s_j), Get_Dial_Turn(s_i)〉}
13  return G

3.3 LEARNING TO GENERATE REASONING PATHS
Our proposed compositional approach to construct a semantic graph in dialogue context ensures lexical overlaps with the question, but the graph structure does not guarantee the temporal order of dialogue turns. To ensure this sequential information is maintained, we train a generator to predict reasoning paths that traverse from the current dialogue turn to past dialogue turns. We use a Transformer decoder to model the reasoning paths from the current turn t. The first position of the path, z_0, is initialized with the turn-level position embedding of t. The next turn index is generated auto-regressively by conditioning on the previously generated path sequence:

z_0 = Embed(t) ∈ R^d    (2)
Z_{0:m−1} = Embed([t; r̂_1, ..., r̂_{m−1}])    (3)

where r̂_i denotes a predicted dialogue turn index.
The dialogue context and question of the current turn are represented by embedding vectors of their component tokens. Following Vaswani et al. (2017), their representations are enhanced with the sine-cosine positional encoding PosEncode:

Q_t = Embed(Q_t) + PosEncode(Q_t) ∈ R^{L_{Q_t}×d}    (4)
C_t = Embed(C_t) + PosEncode(C_t) ∈ R^{L_{C_t}×d}    (5)

Note that the dialogue context representation C_t is the embedding of dialogue turns up to the last turn t−1, excluding the answer embedding of the current turn A_t. We denote a Transformer attention block as Transformer(query, key, value). The path generator incorporates contextual information through attention layers on the dialogue context and question:

D^{(1)}_{path} = Transformer(Z_{0:m−1}, Z_{0:m−1}, Z_{0:m−1}) ∈ R^{m×d}    (6)
D^{(2)}_{path} = Transformer(D^{(1)}_{path}, Q_t, Q_t) ∈ R^{m×d}    (7)
D^{(3)}_{path} = Transformer(D^{(2)}_{path}, C_t, C_t) ∈ R^{m×d}    (8)

At the m-th decoding step (m ≥ 1), our model selects the next dialogue turn among the set of dialogue turns that are adjacent to the one at the (m−1)-th decoding step in the semantic graph. This is enforced through masking the softmax output scores, in which non-adjacent turn indices are assigned a very low scalar s_masked. We denote the adjacency matrix of the semantic graph G = (V, E) as a square matrix A of size |V|×|V| where A_{i,j} = 1 if 〈i, j〉 ∈ E and A_{i,i} = 1 ∀i = 1, ..., |V|. The probability of decoded turns at the m-th decoding step is:

P_m = softmax(D^{(3)}_{path,m} W_{path}) ∈ R^{|V|},    P_{m,i} = s_masked ∀i | A_{r̂_{m−1},i} = 0    (9)

where W_{path} ∈ R^{d×|V|}. The decoding process is terminated when the next decoded token is an [EOP] (end-of-path) token. During inference time, we adopt a greedy decoding approach. Due to the small size of V, we found that a greedy approach can perform as well as beam search methods. The computational cost of generating reasoning paths in dialogue context is, thus, only dependent on the average path length, which is bounded by the maximum number of dialogue turns.

Data Augmentation. We train our path generator in a supervised manner. At each dialogue turn t with a semantic graph G, we use a graph traversal method, e.g. BFS, to find all paths that start from the current turn to any past turn. We maintain the ground-truth paths with dialogue temporal order by keeping the dialogue turn index in path position m lower than the turn index in path position m−1. We also narrow down ground-truth paths based on their total lexical overlaps with the expected output answers. Using the dialogue in Figure 1 as an example, BFS results in three potential path candidates: 5→4, 5→2, and 5→4→2. We select 5→4→2 as the ground-truth path because it can cover the most sub-nodes in the expected answers. If two paths have the same number of lexical overlaps, we select the one with a shorter length. If two paths are equivalent, we randomly sample one path following a uniform distribution at each training step. Ground-truth reasoning paths are appended with an [EOP] token at the final position as the termination condition. The objective to train the path generator is the generation objective of the reasoning path at each dialogue turn:

R̂_t = argmax_{R_t} P(R_t | C_t, Q_t; φ) = argmax_{R_t} ∏_{m=1}^{L_{path}} P_m(r_m | R_{t,1:m−1}, C_t, Q_t; φ)    (10)

3.4 MULTIMODAL REASONING FROM REASONING PATHS
The graph structure G and the generated path R̂_t are used as a layout to propagate features of both textual and visual inputs. For each dialogue turn in V, we obtain the corresponding embeddings and apply mean pooling to get a vector representation.
We denote the turn-level representations of V as V ∈ R^{d×|V|}. We use attention to retrieve the turn-dependent visual features from the visual input:

M = Transformer(V, I, I) ∈ R^{d×|V|}    (11)

where I is a two-dimensional feature representation of the visual input I. We define a new multimodal graph based on the semantic graph G: G_mm = (V_mm, E_mm) where V_mm = M and edges 〈m_i, m_j〉 ∈ E_mm ∀i, j | 〈i, j〉 ∈ E. We employ a vanilla graph convolution network (Kipf & Welling, 2017) to update turn-level multimodal representations through message passing along all edges:

e_k = (1/|Ω_k|) Σ_{m_j ∈ Ω_k} f(m_k, m_j),    e = (1/|V|) Σ_k e_k,    m̃_k = g(m_k, e_k, e)    (12)

where Ω_k is the set of adjacent nodes of m_k, and f(.) and g(.) are non-linear layers, e.g. MLPs, whose inputs are simply concatenated. To propagate features along a reasoning path R̂_t, we utilize the updated turn-level multimodal representations M̃ ∈ R^{d×|V|} and traverse the path sequentially through the representation of the corresponding turn index r_m in each traversing step. Specifically, we obtain G = {m̃_{r̂_0}, m̃_{r̂_1}, ...} ∈ R^{L_{path}×d}. The traversing process can be done through a recurrent network or a transformer encoder:

G̃ = Transformer(G, G, G) ∈ R^{L_{path}×d}    (13)

To incorporate the propagated features into the target response, we adopt a state-of-the-art decoder model from (Le et al., 2019) that exploits multimodal attention over contextual features. Specifically, we integrate both M̃ and G̃ at each response decoding step through two separate attention layers. Besides, we also experiment with integrating the propagated features into decoders based on Transformer language models. Transformer language models have recently shown impressive performance in generation tasks by transferring language representations pretrained on massive data (Radford et al., 2019). To integrate, we simply concatenate M̃ and G̃ to the input sequence embeddings as input to the language models, similar to (Le & Hoi, 2020; Li et al., 2020b).

Optimization. The multimodal reasoning model is learned jointly with the other model components. All model parameters are optimized through the objectives from both Equation 1 and 10. We use the standard cross-entropy loss, which calculates the logarithm of each softmax score at each decoding position of Â_t and R̂_t.

4 EXPERIMENTS
Dataset. We use the Audio-Visual Scene-Aware Dialogue (AVSD) benchmark developed by Alamri et al. (2019). The benchmark focuses on dialogues grounded on videos from the Charades dataset (Sigurdsson et al., 2016). Each dialogue can have up to 10 dialogue turns, which makes it an appropriate choice to evaluate our approach of reasoning paths over dialogue context. We used the standard I3D visual features to represent the video input. We experimented with the test splits used in the 7th Dialogue System Technology Challenge (DSTC7) (Yoshino et al., 2019) and DSTC8 (Kim et al., 2019). Please see Appendix A for our experimental setups.

Overall Results. The dialogues in the AVSD benchmark focus on question answering over multiple turns and entail less semantic variance than open-domain dialogues. Therefore, we report the objective scores, including BLEU (Papineni et al., 2002), METEOR (Banerjee & Lavie, 2005), ROUGE-L (Lin, 2004), and CIDEr (Vedantam et al., 2015), which are found to have strong correlation with human subjective scores (Alamri et al., 2019). In Tables 1 and 3, we present the test results of our models in comparison with previous models in DSTC7 and DSTC8, respectively.
In both test splits, our models achieve very strong performance against models that do not use pre-trained language models. Compared with models using pre-trained language models and additional fine-tuning, our models achieve competitive performance in both test splits. The performance gain of our models when using GPT2 indicates the sensitivity of current models to the language model used as a generator. A unique benefit of our models over prior approaches is the insight into how the models exploit information from dialogue turns in the form of reasoning paths (please see example outputs in Figure 3).

Ablation Analysis. In Table 4 we report the results of path learning in a global semantic graph. In these graphs, we do not decompose each dialogue turn into component sub-nodes (line 5 in Algorithm 1) but directly compute the similarity score based on the whole sentence embedding. In this case, to train the path generator, we obtain the ground-truth path by using BFS to traverse to the node with the highest sentence-level similarity score to the expected answer. We observe that: (1) models that learn paths based on component lexical overlaps result in better performance than paths based on global lexical overlaps in most of the objective metrics. (2) Propagation by reasoning path alone, without using GCN, does not result in better performance. This can be explained as the information in each traversal step is not independent but still contains semantic dependencies on other turns. It is different from standard reading comprehension problems, where each knowledge base is independent and it is not required to propagate features through a graph structure to obtain contextual updates. Please see Appendix B for additional analysis of Table 4.

Impacts of Reasoning Path Learning. We compare models that can learn reasoning paths against those that use a fixed propagation path through the past dialogue turns. From Table 5, we observe that: (1) learning dynamic instance-based reasoning paths outperforms all models that propagate through a default path. This is achieved by using the reasoning path as a skeleton for feature propagation as well as adopting the joint training strategy. We can consider dynamically learned paths as an ideal traversal path to propagate visual cues among all possible paths within the semantic graph of the dialogue context. (2) our path generator can generate reasoning paths well, and the model with learned paths can perform as well as the one using the oracle paths. (3) due to the short length of reasoning paths (limited by the maximum dialogue length), either a beam search or a greedy decoding approach is good enough to generate paths. The greedy approach has the advantage of a much lower computational cost.

Qualitative Analysis. In Figure 3, we demonstrate some examples of our predicted responses and the corresponding reasoning paths. Specifically, we showcase samples in which the reasoning paths are 2-hop (Examples A and B) and 3-hop (Examples C and D), and the distance in each hop can be over one dialogue turn (Examples B and D) or more (Examples A and C). The example reasoning paths are able to connect a sequence of dialogue turns that are most relevant to the question of the current turn. For instance, in Example A, the reasoning path can connect the 7th and 9th turns to the current turn as they contain lexical overlaps, i.e. “the bag” and “the cushion”. The path skips the 8th turn, which is not relevant to the current question. Likewise, in Example C, the path skips the 4th–8th turns.
All examples show that dialogue context can be used to extract additional visual clues relevant to the current turn. Information from dialogues, thus, deserves more attention than just being used as a background text input. Please see Appendix C for additional analysis.

5 CONCLUSION
We proposed PDC, a novel approach to learning a reasoning path over dialogue turns for video-grounded dialogues. Our approach exploits the compositional semantics in each dialogue turn to construct a semantic graph, which is then used to derive an optimal path for feature propagation. Our experiments demonstrate that our model can learn to retrieve paths that are most relevant to the current question. We hope our approach can motivate further study to investigate reasoning over multiple turns, especially in complex settings with interconnected dialogue flows (Sun et al., 2019).

ACKNOWLEDGMENTS
We thank all reviewers for their insightful feedback on the manuscript of this paper. The first author of this paper is supported by the Agency for Science, Technology and Research (A*STAR) Computing and Information Science scholarship.

A EXPERIMENTAL SETUP
We experiment with the Adam optimizer (Kingma & Ba, 2015). The models are trained with a warm-up learning rate period of 5 epochs before the learning rate decays, and training runs for up to 50 epochs. The best model is selected by the average loss on the validation set. All model parameters, except the decoder parameters when using pre-trained language models, are initialized with a uniform distribution (Glorot & Bengio, 2010). The Transformer hyper-parameters are fine-tuned by validation results over d = {128, 256}, h = {1, 2, 4, 8, 16}, and a dropout rate from 0.1 to 0.5. Label smoothing (Szegedy et al., 2016) is applied on the labels of Â_t (label smoothing does not help when optimizing over R̂_t, as the labels are limited by the maximum length of dialogues, i.e. 10 in AVSD).

B IMPACTS OF COMPOSITIONAL SEMANTIC GRAPH
We experiment with model variants based on different types of graph structures. Specifically, we compare our compositional semantic graph against a graph built upon the turn-level global semantics. In these graphs, we do not decompose each dialogue turn into component sub-nodes (line 5 in Algorithm 1) but directly compute the similarity score based on the whole sentence embedding. We also experiment with a fully connected graph structure. In each graph structure, we experiment with temporally ordered edges (TODirect). This is enforced by adding a check whether Get_Dial_Turn(s_j) > Get_Dial_Turn(s_i) in line 11 and removing line 12 in Algorithm 1. From the results in Table 4, we observe that: (1) based on the CIDEr metric, the best performing graph structure is the compositional semantic graph, while the global semantic graph and the fully connected graph structure are almost equivalent. This is consistent with the previous insight in machine reading comprehension research that entity lexical overlaps between knowledge bases are often overlooked by global embeddings (Ding et al., 2019) and it is not reliable to construct a knowledge graph based on global representations alone. (2) regarding the direction of edges, bidirectional edges and temporally ordered edges perform similarly, indicating that processing dialogue turns following temporal order provides enough information and backward processing is only supplementary.
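To make the graph variants compared above concrete, here is a minimal NumPy sketch of how a turn-level adjacency matrix can be built from sub-node similarities, with a flag for the temporally ordered (TODirect) variant. The similarity threshold and the flat-array interface for sub-node vectors are illustrative assumptions, not the exact implementation behind Algorithm 1.

```python
import numpy as np

def build_turn_graph(subnode_vecs, subnode_turn, num_turns, threshold=0.8, temporal_only=False):
    """Turn-level adjacency from pairwise sub-node cosine similarity.

    subnode_vecs: (S, D) pre-trained word vectors of all sub-nodes
    subnode_turn: (S,) dialogue-turn index of each sub-node
    temporal_only: keep only edges from earlier to later turns (TODirect)
    """
    v = subnode_vecs / (np.linalg.norm(subnode_vecs, axis=1, keepdims=True) + 1e-8)
    sim = v @ v.T                                  # pairwise cosine similarities
    adj = np.eye(num_turns, dtype=bool)            # self-loops, A_ii = 1
    for i in range(len(v)):
        for j in range(len(v)):
            ti, tj = subnode_turn[i], subnode_turn[j]
            if ti == tj or sim[i, j] < threshold:
                continue
            if temporal_only:
                if tj > ti:
                    adj[ti, tj] = True             # temporally ordered edge only
            else:
                adj[ti, tj] = True                 # bidirectional edges
                adj[tj, ti] = True
    return adj
```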
C ADDITIONAL QUALITATIVE ANALYSIS
In Figure 4, we demonstrate example outputs of reasoning paths and dialogue responses and have the following observations:
• For questions that do not involve actions and can be answered by a single frame, there is typically no reasoning path, i.e. the path only includes the current turn (Examples A and B). These questions are usually simple and they are rarely involved in multiple dialogue turns.
• In many cases, the dialogue agent can predict an appropriate path but still not generate the correct answers (Examples D and G). These paths are able to connect turns that are most relevant to the current turns, but these past turns contain no or only very limited clues to the expected answers. For example, in Example F, the 2nd and 4th turns are linked by the lexical component for “the woman”. However, they do not have useful information relevant to the current turn, i.e. her clothes.
• Finally, our approach shows that the current benchmark, AVSD, typically contains one-hop (Examples C, D, E) to two-hop (Examples F, G, H) reasoning paths over dialogue context. We hope future dialogue benchmarks will factor in the complexity of dialogue context in terms of reasoning hops to facilitate better research of intelligent dialogue systems.

Discussion of failure cases. From the above observations, we identify the following scenarios that our models are susceptible to and propose potential directions for improvement.
• Long complex utterances. One limitation of our method is its dependence on syntactical parsers to decompose a sentence into sub-nodes. In most dialogues, this problem is not too serious due to the short length of utterances, usually just a single sentence. However, in cases where the utterance contains multiple sentences/clauses or exhibits usage of spoken language with loose linguistic syntax, the parser may fail to decompose it properly. For instance, in Example G in Figure 4, the ground-truth answer contains a causality-based clause (“because”), making it harder to identify sub-nodes such as “sneeze” or “dusty”.
• Contextualized semantic similarity. Another area in which we can improve upon this method is to inject some form of sentence-level contextual cues into each sub-node to improve their semantic representations. For instance, in a hypothetical dialogue that involves 2 question utterances such as the 2nd turn in Example A and the 6th turn in Example E in Figure 4, our method might not detect the connection between these two as they do not have overlapping component sub-nodes. However, they are both related to the audio aspect of the video and a reasoning path between these two turns is appropriate.

D STATISTICS OF LOCAL VS. GLOBAL SEMANTIC GRAPHS
In Table 6, we report the statistics of graph structures constructed with local and global semantics in all data splits of the AVSD benchmark. We observe that constructing graphs with local semantics results in a lower number of instances with no reasoning paths than making graphs with global semantics. This is due to the compositionality in our method, resulting in higher lexical overlap between dialogue turns. With our method, the number of sub-nodes per dialogue turn is more than 4 on average, making it easier to connect dialogue turns. This also leads to a larger and more diverse set of reasoning paths for supervised learning. In local semantic graphs, the number of reasoning paths per dialogue turn is 2 to 3 on average, higher than in global semantic graphs.
Although our method requires additional computational effort to construct these graphs, it scales with the size of the dialogue, i.e. the number of dialogue turns. To efficiently construct these graphs in a dialogue, the semantic graph of a dialogue turn can be built on top of the semantic graph of the previous turn. This is done by simply adding the new sub-nodes to the previous turn's semantic graph and defining new edges adjacent to these sub-nodes only. In this way, the complexity of our graph construction method is linear in the number of dialogue turns.
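As a sketch of the incremental construction described above, the function below extends the previous turn's graph with one new turn, computing similarities only between the new turn's sub-nodes and the sub-nodes already stored; vectors are assumed to be pre-normalized and the threshold is again an illustrative choice rather than the paper's setting.

```python
import numpy as np

def add_turn(adj, stored_vecs, stored_turns, new_vecs, threshold=0.8):
    """Extend a semantic graph over T turns with the sub-nodes of turn T.

    adj: (T, T) boolean adjacency over the existing turns
    stored_vecs: (S, D) unit-norm vectors of previously seen sub-nodes
    stored_turns: (S,) turn index of each stored sub-node
    new_vecs: (S_new, D) unit-norm vectors of the new turn's sub-nodes
    """
    T = adj.shape[0]
    grown = np.zeros((T + 1, T + 1), dtype=bool)
    grown[:T, :T] = adj
    grown[T, T] = True                                  # self-loop for the new turn
    sim = new_vecs @ stored_vecs.T                      # (S_new, S) cosine similarities
    for i, j in np.argwhere(sim >= threshold):
        grown[T, stored_turns[j]] = True                # connect new turn to matched turns
        grown[stored_turns[j], T] = True
    vecs = np.concatenate([stored_vecs, new_vecs], axis=0)
    turns = np.concatenate([stored_turns, np.full(len(new_vecs), T)])
    return grown, vecs, turns
```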
1. What is the focus and contribution of the paper regarding semantic graphs for dialogue contexts?
2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty and performance improvements?
3. What are the weaknesses of the paper, especially regarding its ablation studies and scalability concerns?
4. Do you have any questions or suggestions regarding the applicability of the method to other problems or datasets?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review
This paper proposes creating a semantic graph connecting the multiple turns in a dialogue and subsequently learning reasoning paths in that graph to find the most relevant nodes for answering a given question in a dialogue context.

Strengths:
- This paper proposes to learn non-linear information flows in sequential data, which is a well-motivated problem. The proposed method is novel.
- Training a transformer decoder to learn reasoning paths and using BFS supervision to find the ground-truth paths is very interesting (however, empirical support that this is effective is lacking, see Weaknesses below).
- The proposed approach significantly and consistently outperforms existing benchmarks on the AVSD dataset.
- The illustration in Fig 3 demonstrates the benefit of using the selected nodes from the graph.

Weaknesses:
- From the results in Tables 1 and 3, it is clear that multimodal reasoning over "relevant" nodes in the semantic graph (termed a reasoning path in the paper) helps the model by reducing the noise. However, whether there is any benefit to learning that path is unclear from the ablation study presented in Table 5, where there is no noticeable difference in the results between learned vs. fixed paths. In fact, "Path through last 7 turns" performs comparably to the "Learned Path", which raises questions about the usefulness of the transformer-decoder-based path learner. Similarly, results in Table 4 seem within noise of each other, hence making the argument that "component lexical overlap is better than global lexical overlap" weak.
- It is unclear if this approach of creating the graph (by finding pairwise semantic similarity) is scalable to real-world datasets containing several turns with a potentially large number of tokens. There is no analysis of information redundancy among the nodes of the graph, which could help prune the graph.
- The results are only presented on one dataset. The transformer decoder learning seems to be tuned to this particular dataset ("greedy approach works better due to small size of V"). This questions the generality of the proposed approach.

Questions / Suggestions:
- It is unclear what (if anything) makes this method specific to video-grounded Q/A. Can this methodology be applied to other problems involving dialogues? Can it be applied to any problem containing time-series data?
- There needs to be some discussion as to why RLM performs better/comparably (Table 1) when using pretraining.
ICLR
Title Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues Abstract Compared to traditional visual question answering, video-grounded dialogues require additional reasoning over dialogue context to answer questions in a multiturn setting. Previous approaches to video-grounded dialogues mostly use dialogue context as a simple text input without modelling the inherent information flows at the turn level. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). PDC model discovers information flows among dialogue turns through a semantic graph constructed based on lexical components in each question and answer. PDC model then learns to predict reasoning paths over this semantic graph. Our path prediction model predicts a path from the current turn through past dialogue turns that contain additional visual cues to answer the current question. Our reasoning model sequentially processes both visual and textual information through this reasoning path and the propagated features are used to generate the answer. Our experimental results demonstrate the effectiveness of our method and provide additional insights on how models use semantic dependencies in a dialogue context to retrieve visual cues. 1 INTRODUCTION Traditional visual question answering (Antol et al., 2015; Jang et al., 2017) involves answering questions about a given image. Extending from this line of research, recently Das et al. (2017); Alamri et al. (2019) add another level of complexity by positioning each question and answer pair in a multi-turn or conversational setting (See Figure 1 for an example). This line of research has promising applications to improve virtual intelligent assistants in multi-modal scenarios (e.g. assistants for people with visual impairment). Most state-of-the-part approaches in this line of research (Kang et al., 2019; Schwartz et al., 2019b; Le et al., 2019) tackle the additional complexity in the multi-turn setting by learning to process dialogue context sequentially turn by turn. Despite the success of these approaches, they often fail to exploit the dependencies between dialogue turns of long distance, e.g. the 2nd and 5th turns in Figure 1. In long dialogues, this shortcoming becomes more obvious and necessitates an approach for learning long-distance dependencies between dialogue turns. To reason over dialogue context with long-distance dependencies, recent research in dialogues discovers graph-based structures at the turn level to predict the speaker’s emotion (Ghosal et al., 2019) or generate sequential questions semi-autoregressively (Chai & Wan, 2020). Recently Zheng et al. (2019) incorporate graph neural models to connect the textual cues between all pairs of dialogue turns. These methods, however, involve a fixed graphical structure of dialogue turns, in which only a small number of nodes contains lexical overlap with the question of the current turn, e.g. the 1st, 3rd, and 5th turns in Figure 1. These methods also fail to factor in the temporality of dialogue turns as the graph structures do not guarantee the sequential ordering among turns. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). PDC model learns a reasoning path that traverses through dialogue turns to propagate contextual cues that are densely related to the semantics of the current questions. Our approach balances between a sequential and graphical process to exploit dialogue information. 
Our work is related to the long-studied research domain of discourse structures, e.g. (Barzilay & Lapata, 2008; Feng & Hirst, 2011; Tan et al., 2016; Habernal & Gurevych, 2017). A form of discourse structure is argument structures, including premises and claims and their relations. Argument structures have been studied to assess different characteristics in text, such as coherence, persuasiveness, and susceptibility to attack. However, most efforts are designed for discourse study in monologues and much less attention is directed towards conversational data. In this work, we investigate a form of discourse structure through semantic graphs built upon the overlap of component representations among dialogue turns. We further enhance the models with a reasoning path learning model to learn the best information path for the next utterance generation. To learn a reasoning path, we incorporate our method with bridge entities, a concept often seen in reading comprehension research, and earlier used in entity-based discourse analysis (Barzilay & Lapata, 2008). In reading comprehension problems, bridge entities denote entities that are common between two knowledge bases, e.g. Wikipedia paragraphs in HotpotQA (Yang et al., 2018b). In discourse analysis, entities and their locations in text are used to learn linguistic patterns that indicate certain qualities of a document. In our method, we first reconstruct each dialogue turn (including question and answer) into a set of component sub-nodes (e.g. entities, action phrases) using common syntactical dependency parsers. Each resulting dialogue turn contains sub-nodes that can be used as bridge entities. Our reasoning path learning approach contains 2 phases: (1) first, at each dialogue turn, a graph network is constructed at the turn level. Any two turns are connected if they have an overlapping sub-node or if two of their sub-nodes are semantically similar. (2) Secondly, a path generator is trained to predict a path from the current dialogue turn to past dialogue turns that provide additional and relevant cues to answer the current question. The predicted path is used as a skeleton layout to propagate visual features through each step of the path. Specifically, in PDC, we adopt non-parameterized approaches (e.g. cosine similarity) to construct the edges in graph networks and each sub-node is represented by pre-trained word embedding vectors. Our path generator is a transformer decoder that auto-regressively generates the next turn index conditioned on the previously generated turn sequence. Our reasoning model is a combination of a vanilla graph convolutional network (Kipf & Welling, 2017) and a transformer encoder (Vaswani et al., 2017). In each traversing step, we retrieve visual features conditioned by the corresponding dialogue turn and propagate the features to the next step. Finally, the propagated multimodal features are used as input to a transformer decoder to predict the answer. Our experimental results show that our method can improve the results on the Audio-Visual Scene-Aware Dialogues (AVSD) generation settings (Alamri et al., 2019), outperforming previous state-of-the-art methods. We evaluate our approach through comprehensive ablation analysis and qualitative study. The PDC model also provides additional insights on how the inherent contextual cues in dialogue context are learned in neural networks in the form of a reasoning path.

2 RELATED WORK
Discourses in monologues. Related to our work is the research of discourse structures.
A long-studied line of research in this domain focuses on argument mining to identify the structure of arguments, claims and premises, and the relations between them (Feng & Hirst, 2011; Stab & Gurevych, 2014; Peldszus & Stede, 2015; Persing & Ng, 2016; Habernal & Gurevych, 2017). More recently, Ghosh et al. (2016); Duthie & Budzynska (2018); Jiang et al. (2019) propose to learn argument structures in student essays and official debates. In earlier approaches, Barzilay & Lapata (2008); Lin et al. (2011); Feng et al. (2014) study discourses to derive coherence assessment methods through entity-based representations of text. These approaches are proposed from linguistic theories surrounding entity patterns in discourses, i.e. how they are introduced and discussed (Grosz et al., 1995). Guinaudeau & Strube (2013); Putra & Tokunaga (2017) extend prior work with graphical structures in which sentence similarity is calculated based on semantic vectors representing those sentences. These lines of research show that studying discourse structures is useful in many tasks, such as document ranking and discrimination. However, most of these approaches are designed for monologues rather than dialogues.

Discourses in dialogues. More related to our problem setting is discourse research on text in a multi-turn setting. Murakami & Raymond (2010); Boltužić & Šnajder (2014); Swanson et al. (2015); Tan et al. (2016); Niculae et al. (2017); Morio & Fujita (2018); Chakrabarty et al. (2019) introduce new corpora and different methods to mine arguments in online discussion forums. Their models are trained to extract claims and premises in each user post and identify the relations between argument components in each pair of user posts. More recently, Li et al. (2020a); Jo et al. (2020) extend argument mining in online threads to identify attackability and persuasiveness in online posts. In this work, we address the problem of video-grounded dialogue, in which dialogue turns are often semantically connected by a common grounding information source, a video. In this task, a discourse-based approach enables dialogue models to learn to anticipate the upcoming textual information in future dialogue turns. However, directly applying prior work on discourse or argument structures to video-grounded dialogues is not straightforward due to the inherent difference between online discussion posts and video-grounded dialogues. In video-grounded dialogues, the language is often closer to spoken language and there are fewer clear argument structures to be learned. Moreover, the presence of video necessitates the interaction between multiple modalities, text and vision. Incorporating traditional discourse structures to model cross-modality interaction is not straightforward. In this work, we propose to model dialogue context by using compositional graphical structures and constructing information traversal paths through dialogue turns.

Graph-based dialogue models. Related to our work is research that investigates different types of graph structures in dialogue. Hu et al. (2019); Shi & Huang (2019); Zhu et al. (2020) address the "reply_to" relationship among multi-party dialogues through graph networks that incorporate conversational flows in comment threads on social networks, e.g. Reddit and Ubuntu IRC, and online games. Zheng et al. (2019) propose a fully connected graph structure at the turn level for visual dialogues. Concurrently, Ghosal et al.
(2019) also propose a fully connected graph structure with heterogeneous edges to detect the emotion of participating speakers. All of these methods discover graph structures connecting pairs of dialogue turns with little lexical overlap, resulting in sub-optimal feature propagation. This drawback becomes more significant in question answering problems in multi-turn settings. Our approach constructs graph networks based on compositional similarities.

Reasoning path learning. Our method is also motivated by recent research on machine reading comprehension, e.g. WikiHop (Welbl et al., 2018) and HotpotQA (Yang et al., 2018a). De Cao et al. (2019); Qiu et al. (2019) construct graph networks of supporting documents with entity nodes that are connected based on different kinds of relationships. Tu et al. (2019); Tang et al. (2020) enhance these methods with additional edges connecting output candidates and documents. Extended from these methods are path-based approaches that learn to predict a reasoning path through supporting documents. Kundu et al. (2019); Asai et al. (2020) score and rank path candidates that connect entities in the question to the target answer. A common strategy among these methods is the use of bridge entities. However, unlike reading comprehension, dialogues are normally not entity-centric and it is not trivial to directly adopt bridge entities into dialogue context.

Cross-modality feature learning. Our work is related to research that integrates visual and linguistic representations. A line of research in this domain is the problem of visual QA, e.g. (Minh Le et al., 2020; Gao et al., 2019). Closer to our method are methods that adopt compositionality in textual features. Specifically, Socher et al. (2014) introduce image and language representation learning by detecting the component lexical parts in sentences and combining them with image features. The main difference between these approaches and our work is the study of cross-modalities in a multi-turn setting. Our approach directly tackles the embedded sequential order in dialogue utterances and examines how cross-modality features are passed from turn to turn.

3 METHOD
To describe our PDC model, we introduce a new graph-based method (Section 3.2) that constructs a graph structure to connect turn-level representations in dialogue context based on their compositional semantics. The compositional semantics consists of sub-nodes detected through syntactical dependency parsing methods. We enhance our approach with a path-based propagation method (Section 3.3) to narrow down the contextual information that facilitates question answering of the current turn. Our approach integrates a strong strategy to model dialogue flows in the form of graphical and path-based information such that contextual linguistic information is exploited to propagate relevant visual features (Section 3.4). Figure 2 demonstrates an overview of our method.

3.1 PROBLEM DEFINITION
The inputs to a question answering problem in a multi-turn setting consist of a dialogue D and the visual input of a video I. Each dialogue contains a sequence of dialogue turns, each of which is a pair of question Q and answer A. At each dialogue turn t, we denote the dialogue context Ct as all previous dialogue turns, $\mathcal{C}_t = \{(\mathcal{Q}_i, \mathcal{A}_i)\}_{i=1}^{t-1}$. Since it is positioned in a dialogue, the question of turn t, Qt, might be dependent on a subset of the dialogue context Ct. The output is the answer of the current turn Ât.
Each textual component, i.e. Q and A, is represented as a sequence of token or word indices $\{w_m\}_{m=1}^{L} \in |\mathcal{V}|$, where L is the sequence length and $\mathcal{V}$ is the vocabulary set. The objective of the task is the generation objective that outputs the answer of the current turn:

$\hat{\mathcal{A}}_t = \arg\max_{\mathcal{A}_t} P(\mathcal{A}_t \mid \mathcal{I}, \mathcal{C}_t, \mathcal{Q}_t; \theta) = \arg\max_{\mathcal{A}_t} \prod_{m=1}^{L_A} P_m(w_m \mid \mathcal{A}_{t,1:m-1}, \mathcal{I}, \mathcal{C}_t, \mathcal{Q}_t; \theta)$   (1)

3.2 COMPOSITIONAL SEMANTIC GRAPH OF DIALOGUE CONTEXT
The semantic relations between dialogue turns are decomposed into semantic relations between sub-nodes that constitute each turn. These composition relations serve as strong clues to determine how a dialogue turn is related to another. We first employ a co-reference resolution system, e.g. (Clark & Manning, 2016), to replace pronouns with the original entities. We then explore using the Stanford parser system (v3.9.2, retrieved at https://nlp.stanford.edu/software/lex-parser.shtml) to discover sub-nodes. The parser decomposes each sentence into grammatical components, where a word and its modifier are connected in a tree structure. For each dialogue turn, we concatenate the question and answer of that turn as input to the parser. The output dependency tree is pruned to remove unimportant constituents and merge adjacent nodes to form a semantic unit. A graph structure G is then constructed. Any two turns are connected if a pair of their corresponding sub-nodes is semantically similar. To calculate the similarity score, we obtain their pre-trained word2vec embeddings (https://code.google.com/archive/p/word2vec/) and compute the cosine similarity score. Algorithm 1 provides the details of the procedure to automatically construct a semantic graph. Note that our approach can also be applied with other co-reference resolution systems, parsers, or pre-trained embeddings. Unlike graph structures in machine reading comprehension, such as the Wikipedia graph, the semantic graph G is not fixed throughout the sample population but is constructed for each dialogue and at each turn.

Algorithm 1: Compositional semantic graph of dialogue context
Data: Dialogue context Ct, question of the current turn Qt
Result: Semantic graph G = (V, E)
1:  begin
2:    T ← ∅; G = (V, E); E ← ∅; V ← ∅; S ← ∅
3:    H ← Coreference_Resolution([Ct; Qt])
4:    for each dialogue turn h ∈ H do
5:      Th ← Merge_Nodes(Prune_Tree(Dependency_Parse(h))); T ← T ∪ {Th}
6:      V ← V ∪ {h}; E ← E ∪ {⟨Turn_Position(h), Turn_Position(h)⟩}
7:    for each dependency tree T = (VT, ET) ∈ T do S ← S ∪ {VT}
8:    for each sub-node si ∈ S do
9:      for each sub-node sj ∈ S do
10:       if not In_Same_Turn(si, sj) and Is_Similar(si, sj) then
11:         E ← E ∪ {⟨Get_Dial_Turn(si), Get_Dial_Turn(sj)⟩}
12:         E ← E ∪ {⟨Get_Dial_Turn(sj), Get_Dial_Turn(si)⟩}
13:   return G
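For concreteness, below is a minimal sketch (in Python) of the graph construction in Algorithm 1, assuming coreference resolution and dependency parsing have already produced a list of sub-node strings per turn. The similarity threshold and the `embed` callable (e.g. an averaged word2vec lookup) are illustrative assumptions, not values stated in the paper.

```python
import itertools
import numpy as np

def build_semantic_graph(turn_subnodes, embed, threshold=0.8):
    """Sketch of Algorithm 1: connect two turns whenever any pair of their sub-nodes
    is identical or has cosine similarity of their embeddings above `threshold`.
    turn_subnodes: list over turns, each a list of sub-node strings.
    embed: callable mapping a sub-node string to a vector (assumption)."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    n = len(turn_subnodes)
    edges = {(i, i) for i in range(n)}                      # self-loops, as in line 6
    for i, j in itertools.combinations(range(n), 2):
        for si, sj in itertools.product(turn_subnodes[i], turn_subnodes[j]):
            if si == sj or cos(embed(si), embed(sj)) >= threshold:
                edges.add((i, j))                           # lines 11-12: both directions
                edges.add((j, i))
                break
    return edges
```

The returned edge set corresponds to the turn-level adjacency A used later for masked path decoding (Eq. 9).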
3.3 LEARNING TO GENERATE REASONING PATHS
Our proposed compositional approach to constructing a semantic graph in dialogue context ensures lexical overlaps with the question, but the graph structure does not guarantee the temporal order of dialogue turns. To ensure this sequential information is maintained, we train a generator to predict reasoning paths that traverse from the current dialogue turn to past dialogue turns. We use a Transformer decoder to model the reasoning paths from the current turn t. The first position of the path, z0, is initialized with the turn-level position embedding of t. The next turn index is generated auto-regressively by conditioning on the previously generated path sequence:

$z_0 = \text{Embed}(t) \in \mathbb{R}^d$   (2)
$Z_{0:m-1} = \text{Embed}([t; \hat{r}_1, ..., \hat{r}_{m-1}])$   (3)

where $\hat{r}_i$ denotes a predicted dialogue turn index. The dialogue context and question of the current turn are represented by embedding vectors of their component tokens. Following Vaswani et al. (2017), their representations are enhanced with the sine-cosine positional encoding PosEncode:

$Q_t = \text{Embed}(\mathcal{Q}_t) + \text{PosEncode}(\mathcal{Q}_t) \in \mathbb{R}^{L_{Q_t} \times d}$   (4)
$C_t = \text{Embed}(\mathcal{C}_t) + \text{PosEncode}(\mathcal{C}_t) \in \mathbb{R}^{L_{C_t} \times d}$   (5)

Note that the dialogue context representation Ct is the embedding of dialogue turns up to the last turn t−1, excluding the answer embedding of the current turn At. We denote a Transformer attention block as Transformer(query, key, value). The path generator incorporates contextual information through attention layers on the dialogue context and question:

$D^{(1)}_{\text{path}} = \text{Transformer}(Z_{0:m-1}, Z_{0:m-1}, Z_{0:m-1}) \in \mathbb{R}^{m \times d}$   (6)
$D^{(2)}_{\text{path}} = \text{Transformer}(D^{(1)}_{\text{path}}, Q_t, Q_t) \in \mathbb{R}^{m \times d}$   (7)
$D^{(3)}_{\text{path}} = \text{Transformer}(D^{(2)}_{\text{path}}, C_t, C_t) \in \mathbb{R}^{m \times d}$   (8)

At the m-th decoding step (m ≥ 1), our model selects the next dialogue turn among the set of dialogue turns that are adjacent to the one at the (m−1)-th decoding step in the semantic graph. This is enforced by masking the softmax output scores, in which non-adjacent turn indices are assigned a very low scalar $s_{\text{masked}}$. We denote the adjacency matrix of the semantic graph G = (V, E) as a square matrix A of size |V| × |V| where $A_{i,j} = 1$ if ⟨i, j⟩ ∈ E and $A_{i,i} = 1$ for all i = 1, ..., |V|. The probability of decoded turns at the m-th decoding step is:

$P_m = \text{softmax}(D^{(3)}_{\text{path},m} W_{\text{path}}) \in \mathbb{R}^{|\mathcal{V}|}, \qquad P_{m,i} = s_{\text{masked}} \ \forall i \mid A_{\hat{r}_{m-1}, i} = 0$   (9)

where $W_{\text{path}} \in \mathbb{R}^{d \times |\mathcal{V}|}$. The decoding process is terminated when the next decoded token is an [EOP] (end-of-path) token. During inference, we adopt a greedy decoding approach. Due to the small size of V, we found that a greedy approach can perform as well as beam search methods. The computational cost of generating reasoning paths in dialogue context is, thus, only dependent on the average path length, which is bounded by the maximum number of dialogue turns.

Data Augmentation. We train our path generator in a supervised manner. At each dialogue turn t with a semantic graph G, we use a graph traversal method, e.g. BFS, to find all paths that start from the current turn to any past turn. We maintain the ground-truth paths with dialogue temporal order by keeping the dialogue turn index in path position m lower than the turn index in path position m−1. We also narrow down ground-truth paths based on their total lexical overlaps with the expected output answers. Using the dialogue in Figure 1 as an example, BFS results in three potential path candidates: 5→4, 5→2, and 5→4→2. We select 5→4→2 as the ground-truth path because it covers the most sub-nodes in the expected answers. If two paths have the same number of lexical overlaps, we select the one with the shorter length. If two paths are equivalent, we randomly sample one path following a uniform distribution at each training step. Ground-truth reasoning paths are appended with an [EOP] token at the final position as the termination condition. The objective to train the path generator is the generation objective of the reasoning path at each dialogue turn:

$\hat{\mathcal{R}}_t = \arg\max_{\mathcal{R}_t} P(\mathcal{R}_t \mid \mathcal{C}_t, \mathcal{Q}_t; \phi) = \arg\max_{\mathcal{R}_t} \prod_{m=1}^{L_{\text{path}}} P_m(r_m \mid \mathcal{R}_{t,1:m-1}, \mathcal{C}_t, \mathcal{Q}_t; \phi)$   (10)
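A minimal sketch of the adjacency-masked decoding step in Eq. 9 is shown below. The decoder logits are taken as given (in practice they come from $D^{(3)}_{\text{path}} W_{\text{path}}$); the numerical value of $s_{\text{masked}}$ and the omission of the [EOP] token (which would be an extra, always-allowed index) are assumptions made for illustration.

```python
import numpy as np

def masked_greedy_step(logits, adjacency, prev_turn, s_masked=-1e9):
    """One decoding step of Eq. 9 (sketch): turns not adjacent to the previously
    decoded turn in the semantic graph receive the low score s_masked before the
    softmax, so greedy decoding can only move along graph edges."""
    scores = np.where(adjacency[prev_turn] == 1, logits, s_masked)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(probs.argmax()), probs

# Toy example: 5 turns; the current turn 4 is adjacent to turns 1 and 3.
adjacency = np.eye(5, dtype=int)
for i, j in [(4, 1), (4, 3), (3, 1)]:
    adjacency[i, j] = adjacency[j, i] = 1
next_turn, _ = masked_greedy_step(np.random.randn(5), adjacency, prev_turn=4)
assert next_turn in {1, 3, 4}   # only adjacent turns (or the self-loop) can be selected
```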
3.4 MULTIMODAL REASONING FROM REASONING PATHS
The graph structure G and generated path R̂t are used as a layout to propagate features of both textual and visual inputs. For each dialogue turn from V, we obtain the corresponding embeddings and apply mean pooling to get a vector representation. We denote the turn-level representations of V as $V \in \mathbb{R}^{d \times |\mathcal{V}|}$. We use attention to retrieve the turn-dependent visual features from the visual input:

$M = \text{Transformer}(V, I, I) \in \mathbb{R}^{d \times |\mathcal{V}|}$   (11)

where I is a two-dimensional feature representation of the visual input $\mathcal{I}$. We define a new multimodal graph based on the semantic graph G: $\mathcal{G}_{mm} = (\mathcal{V}_{mm}, \mathcal{E}_{mm})$ where $\mathcal{V}_{mm} = M$ and edges $\langle m_i, m_j \rangle \in \mathcal{E}_{mm}$ for all i, j with ⟨i, j⟩ ∈ E. We employ a vanilla graph convolution network (Kipf & Welling, 2017) to update turn-level multimodal representations through message passing along all edges:

$e_k = \frac{1}{|\Omega_k|} \sum_{m_j \in \Omega_k} f(m_k, m_j), \qquad e = \frac{1}{|\mathcal{V}|} \sum_k e_k, \qquad \tilde{m}_k = g(m_k, e_k, e)$   (12)

where $\Omega_k$ is the set of adjacent nodes of $m_k$, and f(.) and g(.) are non-linear layers, e.g. MLPs, whose inputs are simply concatenated. To propagate features along a reasoning path R̂t, we utilize the updated turn-level multimodal representations $\tilde{M} \in \mathbb{R}^{d \times |\mathcal{V}|}$ and traverse the path sequentially through the representation of the corresponding turn index $r_m$ in each traversing step. Specifically, we obtain $G = \{\tilde{m}_{\hat{r}_0}, \tilde{m}_{\hat{r}_1}, ...\} \in \mathbb{R}^{L_{\text{path}} \times d}$. The traversing process can be done through a recurrent network or a transformer encoder:

$\tilde{G} = \text{Transformer}(G, G, G) \in \mathbb{R}^{L_{\text{path}} \times d}$   (13)

To incorporate propagated features into the target response, we adopt a state-of-the-art decoder model from (Le et al., 2019) that exploits multimodal attention over contextual features. Specifically, we integrate both M̃ and G̃ at each response decoding step through two separate attention layers. Besides, we also experiment with integrating propagated features with the decoder as a Transformer language model. Transformer language models have recently shown impressive performance in generation tasks by transferring language representations pretrained on massive data (Radford et al., 2019). To integrate, we simply concatenate M̃ and G̃ to the input sequence embeddings as input to the language models, similar to (Le & Hoi, 2020; Li et al., 2020b).

Optimization. The multimodal reasoning model is learned jointly with the other model components. All model parameters are optimized through the objectives from both Equations 1 and 10. We use the standard cross-entropy loss, which calculates the logarithm of each softmax score at each decoding position of Ât and R̂t.
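The message-passing update in Eq. 12 can be sketched as a single layer, for instance as below (PyTorch). The hidden size, the use of ReLU inside f and g, and the dense all-pairs computation are assumptions for illustration; the paper only specifies that f and g are non-linear layers over concatenated inputs.

```python
import torch
import torch.nn as nn

class TurnGCNLayer(nn.Module):
    """Sketch of the message-passing update in Eq. 12 over turn-level nodes."""
    def __init__(self, d):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())   # f(m_k, m_j)
        self.g = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU())   # g(m_k, e_k, e)

    def forward(self, m, adj):
        # m:   (|V|, d)  turn-level multimodal node features
        # adj: (|V|, |V|) 0/1 float adjacency of the semantic graph, with self-loops
        V = m.size(0)
        pair = torch.cat([m.unsqueeze(1).expand(V, V, -1),
                          m.unsqueeze(0).expand(V, V, -1)], dim=-1)
        msgs = self.f(pair)                                   # messages f(m_k, m_j) for all pairs
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        e_k = (adj.unsqueeze(-1) * msgs).sum(dim=1) / deg     # mean over neighbours Ω_k
        e = e_k.mean(dim=0, keepdim=True).expand(V, -1)       # graph-level mean
        return self.g(torch.cat([m, e_k, e], dim=-1))         # updated nodes m̃_k

layer = TurnGCNLayer(d=256)
m_tilde = layer(torch.randn(10, 256), torch.eye(10))          # 10 turns, self-loops only
```

Applying such a layer to the turn-dependent visual features M from Eq. 11 would yield the updated representations M̃ that are then gathered along the reasoning path.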
4 EXPERIMENTS
Dataset. We use the Audio-Visual Scene-Aware Dialogue (AVSD) benchmark developed by Alamri et al. (2019). The benchmark focuses on dialogues grounded on videos from the Charades dataset (Sigurdsson et al., 2016). Each dialogue can have up to 10 dialogue turns, which makes it an appropriate choice to evaluate our approach of reasoning paths over dialogue context. We used the standard I3D visual features to represent the video input. We experimented with the test splits used in the 7th Dialogue System Technology Challenge (DSTC7) (Yoshino et al., 2019) and DSTC8 (Kim et al., 2019). Please see Appendix A for our experimental setup.

Overall Results. The dialogues in the AVSD benchmark focus on question answering over multiple turns and entail less semantic variance than open-domain dialogues. Therefore, we report the objective scores, including BLEU (Papineni et al., 2002), METEOR (Banerjee & Lavie, 2005), ROUGE-L (Lin, 2004), and CIDEr (Vedantam et al., 2015), which are found to have strong correlation with human subjective scores (Alamri et al., 2019). In Tables 1 and 3, we present the test results of our models in comparison with previous models in DSTC7 and DSTC8, respectively. In both test splits, our models achieve very strong performance against models that do not use pre-trained language models. Compared with models using pre-trained language models and additional fine-tuning, our models achieve competitive performance in both test splits. The performance gain of our models when using GPT2 indicates the current models' sensitivity to language modelling as a generator. A unique benefit of our models over prior approaches is the insight into how the models exploit information from dialogue turns in the form of reasoning paths (please see example outputs in Figure 3).

Ablation Analysis. In Table 4 we report the results of path learning in a global semantic graph. In these graphs, we do not decompose each dialogue turn into component sub-nodes (line 5 in Algorithm 1) but directly compute the similarity score based on the whole sentence embedding. In this case, to train the path generator, we obtain the ground-truth path by using BFS to traverse to the node with the highest sentence-level similarity score to the expected answer. We observe that: (1) models that learn paths based on component lexical overlaps result in better performance than paths based on global lexical overlaps in most of the objective metrics. (2) Propagation by reasoning path alone, without using the GCN, does not result in better performance. This can be explained by the fact that the information in each traversal step is not independent but still contains semantic dependencies on other turns. This is different from standard reading comprehension problems, where each knowledge base is independent and it is not required to propagate features through a graph structure to obtain contextual updates. Please see Appendix B for additional analysis of Table 4.

Impacts of Reasoning Path Learning. We compare models that can learn reasoning paths against those that use a fixed propagation path through the past dialogue turns. From Table 5, we observe that: (1) learning dynamic instance-based reasoning paths outperforms all models that propagate through a default path. This is achieved by using the reasoning path as a skeleton for feature propagation as well as adopting the joint training strategy. We can consider dynamically learned paths as an ideal traversal path to propagate visual cues among all possible paths within the semantic graph of the dialogue context. (2) Our path generator can generate reasoning paths well, and the model with learned paths can perform as well as one using the oracle paths. (3) Due to the short length of reasoning paths (limited by the maximum dialogue length), either a beam search or a greedy decoding approach is good enough to generate paths. The greedy approach has the advantage of much lower computational cost.

Qualitative Analysis. In Figure 3, we demonstrate some examples of our predicted responses and the corresponding reasoning paths. Specifically, we showcase samples in which the reasoning paths are 2-hop (Examples A and B) and 3-hop (Examples C and D), and the distance in each hop can be one dialogue turn (Examples B and D) or more (Examples A and C). The example reasoning paths are able to connect a sequence of dialogue turns that are most relevant to the question of the current turn. For instance, in Example A, the reasoning path can connect the 7th and 9th turns to the current turn as they contain lexical overlaps, i.e. "the bag" and "the cushion". The path skips the 8th turn, which is not relevant to the current question. Likewise, in Example C, the path skips the 4th to 8th turns.
All examples show that dialogue context can be used to extract additional visual clues relevant to the current turn. Information from dialogues, thus, deserves more attention than just being used as a background text input. Please see Appendix C for additional analysis.

5 CONCLUSION
We proposed PDC, a novel approach to learning a reasoning path over dialogue turns for video-grounded dialogues. Our approach exploits the compositional semantics in each dialogue turn to construct a semantic graph, which is then used to derive an optimal path for feature propagation. Our experiments demonstrate that our model can learn to retrieve paths that are most relevant to the current question. We hope our approach can motivate further study to investigate reasoning over multiple turns, especially in complex settings with interconnected dialogue flows (Sun et al., 2019).

ACKNOWLEDGMENTS
We thank all reviewers for their insightful feedback on the manuscript of this paper. The first author of this paper is supported by the Agency for Science, Technology and Research (A*STAR) Computing and Information Science scholarship.

A EXPERIMENTAL SETUP
We experiment with the Adam optimizer (Kingma & Ba, 2015). The models are trained with a warm-up learning rate period of 5 epochs before the learning rate decays, and the training runs for up to 50 epochs. The best model is selected by the average loss on the validation set. All model parameters, except the decoder parameters when using pre-trained language models, are initialized with a uniform distribution (Glorot & Bengio, 2010). The Transformer hyper-parameters are fine-tuned by validation results over d = {128, 256}, h = {1, 2, 4, 8, 16}, and a dropout rate from 0.1 to 0.5. Label smoothing (Szegedy et al., 2016) is applied on labels of Ât (label smoothing does not help when optimizing over R̂t, as the labels are limited by the maximum length of dialogues, i.e. 10 in AVSD).

B IMPACTS OF COMPOSITIONAL SEMANTIC GRAPH
We experiment with model variants based on different types of graph structures. Specifically, we compare our compositional semantic graph against a graph built upon the turn-level global semantics. In these graphs, we do not decompose each dialogue turn into component sub-nodes (line 5 in Algorithm 1) but directly compute the similarity score based on the whole sentence embedding. We also experiment with a fully connected graph structure. In each graph structure, we experiment with temporally ordered edges (TODirect). This is enforced by adding a check whether Get_Dial_Turn(sj) > Get_Dial_Turn(si) in line 11 and removing line 12 in Algorithm 1. From the results in Table 4, we observe that: (1) based on the CIDEr metric, the best-performing graph structure is the compositional semantic graph, while the global semantic graph and fully connected graph structure are almost equivalent. This is consistent with the previous insight in machine reading comprehension research that entity lexical overlaps between knowledge bases are often overlooked by global embeddings (Ding et al., 2019) and it is not reliable to construct a knowledge graph based on global representations alone. (2) Regarding the direction of edges, bidirectional edges and temporally ordered edges perform similarly, indicating that processing dialogue turns following the temporal order provides enough information and backward processing is only supplementary.
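As a side note, the TODirect variant described above amounts to a simple filter on the edge set produced by the graph construction: following lines 6 and 11 of Algorithm 1, only self-loops and edges pointing from an earlier turn to a later turn are kept. A minimal sketch, operating on the hypothetical edge set returned by the earlier graph-construction sketch:

```python
def to_temporally_ordered(edges):
    """TODirect sketch (Appendix B): drop line 12 of Algorithm 1 and keep an edge
    (i, j) only when j >= i, i.e. self-loops plus edges from earlier to later turns."""
    return {(i, j) for (i, j) in edges if j >= i}
```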
C ADDITIONAL QUALITATIVE ANALYSIS
In Figure 4, we demonstrate example outputs of reasoning paths and dialogue responses and make the following observations:
• For questions that do not involve actions and can be answered by a single frame, there is typically no reasoning path, i.e. the path only includes the current turn (Examples A and B). These questions are usually simple and they are rarely involved in multiple dialogue turns.
• In many cases, the dialogue agent can predict an appropriate path but still not generate the correct answers (Examples D and G). These paths are able to connect turns that are most relevant to the current turn, but these past turns do not contain, or contain very limited, clues to the expected answers. For example, in Example F, the 2nd and 4th turns are linked by the lexical component for "the woman". However, they do not have useful information relevant to the current turn, i.e. her clothes.
• Finally, our approach shows that the current benchmark, AVSD, typically contains one-hop (Examples C, D, E) to two-hop (Examples F, G, H) reasoning paths over dialogue context. We hope future dialogue benchmarks will factor in the complexity of dialogue context in terms of reasoning hops to facilitate better research on intelligent dialogue systems.

Discussion of failure cases. From the above observations, we identify the following scenarios that our models are susceptible to and propose potential directions for improvement.
• Long complex utterances. One limitation of our method is its dependence on syntactical parsing methods to decompose a sentence into sub-nodes. In most dialogues, this problem is not too serious due to the short length of utterances, usually just a single sentence. However, in cases where the utterance contains multiple sentences/clauses or exhibits usage of spoken language with loose linguistic syntax, the parser may fail to decompose it properly. For instance, in Example G in Figure 4, the ground-truth answer contains a causality-based clause ("because"), making it harder to identify sub-nodes such as "sneeze" or "dusty".
• Contextualized semantic similarity. Another area where we can improve this method is to inject some form of sentence-level contextual cues into each sub-node to improve their semantic representations. For instance, in a hypothetical dialogue that involves 2 question utterances such as the 2nd turn in Example A and the 6th turn in Example E in Figure 4, our method might not detect the connection between these two as they do not have overlapping component sub-nodes. However, they are both related to the audio aspect of the video, and a reasoning path between these two turns is appropriate.

D STATISTICS OF LOCAL VS. GLOBAL SEMANTIC GRAPHS
In Table 6, we report the statistics of graph structures constructed by local and global semantics in all data splits of the AVSD benchmark. We observe that constructing graphs with local semantics results in a lower number of instances with no reasoning paths than constructing graphs with global semantics. This is due to the compositionality in our method, resulting in higher lexical overlap between dialogue turns. With our method, the number of sub-nodes per dialogue turn is more than 4 on average, making it easier to connect dialogue turns. This also leads to a larger and more diverse set of reasoning paths for supervised learning. In local semantic graphs, the number of reasoning paths per dialogue turn is 2 to 3 on average, higher than in global semantic graphs.
Although our method requires additional computational effort to construct these graphs, it is scalable to the size of the dialogue, i.e. the number of dialogue turns. To efficiently construct these graphs in a dialogue, the semantic graph of a dialogue turn can be built on top of the semantic graph of the last turn. This is done by simply adding the new sub-nodes to the last turn's semantic graph and defining new edges adjacent to these sub-nodes only. In this way, the complexity of our graph construction method is linear in the number of dialogue turns.
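A minimal sketch of this incremental update, under the same assumptions as the earlier graph-construction sketch (a hypothetical embed callable and similarity threshold), could look as follows; only edges adjacent to the newly added turn are computed.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def extend_semantic_graph(edges, turn_subnodes, new_subnodes, embed, threshold=0.8):
    """Incremental construction sketch (Appendix D): when a new turn arrives, add its
    self-loop and only compare its sub-nodes against earlier turns, instead of
    rebuilding the whole graph from scratch."""
    t = len(turn_subnodes)              # index of the newly added turn
    edges.add((t, t))
    for i, subnodes in enumerate(turn_subnodes):
        if any(si == sj or cosine(embed(si), embed(sj)) >= threshold
               for si in subnodes for sj in new_subnodes):
            edges.add((i, t))
            edges.add((t, i))
    turn_subnodes.append(new_subnodes)
    return edges
```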
1. What is the focus of the paper regarding video-grounded multi-turn QA?
2. What are the strengths and weaknesses of the proposed method in terms of its ability to exploit dialogue information and handle temporal dependencies?
3. How does the method employ multimodal reasoning, and what are the benefits of using turn-level attention and propagating multi-modal turn-level embeddings?
4. Can you provide more explanations on how the message passing part is trained, particularly in section 3.4?
5. How does the method handle cases where each pair of turns may share multiple pairs of lexical spans that are identical?
6. Can you provide an analysis of failure cases to further support the effectiveness of the proposed approach?
7. What does the initial D0 correspond to in section 3.3, equation (4)?
8. Does the reasoning path generator include At during inference?
9. Can you clarify the use of symbols before their definition, such as V in section 3.1 and section 3.2?
10. Are there any minor concerns or typos in the paper that need to be addressed?
Review
Review

The paper studies the problem of video-grounded multi-turn QA and adopts reasoning paths to exploit dialogue information.
- Sequential approaches: fail to exploit long-distance turn dependencies.
- Graphical approaches: fixed structure, fail to factor in temporal dependencies.
- The proposed reasoning path method: balanced between sequential and graphical.

It first constructs a turn-level semantic graph based on overlapping lexical spans:
- Extract lexical spans from each turn (<Q, A> pair) using a (Stanford) parser.
- Two turns are connected if one of their corresponding lexical spans is similar (in terms of word2vec embedding).

Then it trains a path generator to predict paths from each turn to its preceding turns:
- It starts from the current turn and auto-regressively finds the most dependent preceding turn with Transformers.
- The turn-level semantic graph is used to mask the dependencies.
- It is trained with a supervised loss where the target paths are constructed by running BFS on the semantic graph.

Finally, the proposed paths are used to employ multimodal reasoning:
- Visual features are combined with turn-level attention.
- Multi-modal turn-level embeddings are propagated using a GCN.
- A SOTA decoder is then used to generate the language response.

The authors conduct experiments on a benchmark, and the proposed method achieves better QA performance than SOTA without a pre-trained language model and achieves comparable performance when a pre-trained language model is involved. The authors further study different variations of graph structures and show that using graphs constructed based on lexical spans is better than fully connected graphs or graphs based on whole-sentence embeddings. They also show that including bidirectional edges does not necessarily improve the performance. A nice feature of the method is that the generated reasoning path can serve as an extra explanation for the answer.

Some concerns:
- The model is graph-based and thus restricted to scenarios with a small number of turns; it becomes computationally expensive for long-conversation scenarios.
- A more detailed explanation is needed of how the message passing part (Section 3.4) is trained.
- Each pair of turns may share multiple pairs of lexical spans that are identical, e.g. Figure 3-A: "she" in turn 10, but there are 2 "she"s in turn 9. Does the frequency influence the similarity?
- It would be more convincing if an analysis of failure cases were given.
- Section 3.3: in Eq. (4), what does the initial $D_0$ correspond to? $Z_0$?
- The reasoning path generator uses $C_t$ as input; does it include $A_t$ during inference?

Minor concerns:
- Many symbols are used before their definition: the explanation of V is first given in Algorithm 1 (Section 3.2) but V is first used in Eq. 1 (Section 3.1).
- Section 3.3, 2nd paragraph, 4th line: undefined symbols $\hat{r}_1,\dots,\hat{r}_{m-1}$. They are later mentioned as turn indices in Section 3.4, last line of page 5.
- Page 5, line 2: "incorporate" -> "incorporates".
- Index m is used as a word position in Eq. (1) but becomes a decoding step from Section 3.3.
ICLR
Title Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues Abstract Compared to traditional visual question answering, video-grounded dialogues require additional reasoning over dialogue context to answer questions in a multiturn setting. Previous approaches to video-grounded dialogues mostly use dialogue context as a simple text input without modelling the inherent information flows at the turn level. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). PDC model discovers information flows among dialogue turns through a semantic graph constructed based on lexical components in each question and answer. PDC model then learns to predict reasoning paths over this semantic graph. Our path prediction model predicts a path from the current turn through past dialogue turns that contain additional visual cues to answer the current question. Our reasoning model sequentially processes both visual and textual information through this reasoning path and the propagated features are used to generate the answer. Our experimental results demonstrate the effectiveness of our method and provide additional insights on how models use semantic dependencies in a dialogue context to retrieve visual cues. 1 INTRODUCTION Traditional visual question answering (Antol et al., 2015; Jang et al., 2017) involves answering questions about a given image. Extending from this line of research, recently Das et al. (2017); Alamri et al. (2019) add another level of complexity by positioning each question and answer pair in a multi-turn or conversational setting (See Figure 1 for an example). This line of research has promising applications to improve virtual intelligent assistants in multi-modal scenarios (e.g. assistants for people with visual impairment). Most state-of-the-part approaches in this line of research (Kang et al., 2019; Schwartz et al., 2019b; Le et al., 2019) tackle the additional complexity in the multi-turn setting by learning to process dialogue context sequentially turn by turn. Despite the success of these approaches, they often fail to exploit the dependencies between dialogue turns of long distance, e.g. the 2nd and 5th turns in Figure 1. In long dialogues, this shortcoming becomes more obvious and necessitates an approach for learning long-distance dependencies between dialogue turns. To reason over dialogue context with long-distance dependencies, recent research in dialogues discovers graph-based structures at the turn level to predict the speaker’s emotion (Ghosal et al., 2019) or generate sequential questions semi-autoregressively (Chai & Wan, 2020). Recently Zheng et al. (2019) incorporate graph neural models to connect the textual cues between all pairs of dialogue turns. These methods, however, involve a fixed graphical structure of dialogue turns, in which only a small number of nodes contains lexical overlap with the question of the current turn, e.g. the 1st, 3rd, and 5th turns in Figure 1. These methods also fail to factor in the temporality of dialogue turns as the graph structures do not guarantee the sequential ordering among turns. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). PDC model learns a reasoning path that traverses through dialogue turns to propagate contextual cues that are densely related to the semantics of the current questions. Our approach balances between a sequential and graphical process to exploit dialogue information. 
Our work is related to the long-studied research domain of discourse structures, e.g. (Barzilay & Lapata, 2008; Feng & Hirst, 2011; Tan et al., 2016; Habernal & Gurevych, 2017). A form of discourse structure is argument structures, including premises and claims and their relations. Argument structures have been studied to assess different characteristics in text, such as coherence, persuasiveness, and susceptibility to attack. However, most efforts are designed for discourse study in monologues and much less attention is directed towards conversational data. In this work, we investigate a form of discourse structure through semantic graphs built upon the overlap of component representations among dialogue turns. We further enhance the models with a reasoning path learning model to learn the best information path for the next utterance generation. To learn a reasoning path, we incorporate our method with bridge entities, a concept often seen in reading comprehension research, and earlier used in entity-based discourse analysis (Barzilay & Lapata, 2008). In reading comprehension problems, bridge entities denote entities that are common between two knowledge bases e.g. Wikipedia paragraphs in HotpotQA (Yang et al., 2018b). In discourse analysis, entities and their locations in text are used to learn linguistic patterns that indicate certain qualities of a document. In our method, we first reconstruct each dialogue turn (including question and answer) into a set of component sub-nodes (e.g. entities, action phrases) using common syntactical dependency parsers. Each result dialogue turn contains sub-nodes that can be used as bridge entities. Our reasoning path learning approach contains 2 phases: (1) first, at each dialogue turn, a graph network is constructed at the turn level. Any two turns are connected if they have an overlapping sub-node or if two of their sub-nodes are semantically similar. (2) secondly, a path generator is trained to predict a path from the current dialogue turn to past dialogue turns that provide additional and relevant cues to answer the current question. The predicted path is used as a skeleton layout to propagate visual features through each step of the path. Specifically, in PDC, we adopt non-parameterized approaches (e.g. cosine similarity) to construct the edges in graph networks and each sub-node is represented by pre-trained word embedding vectors. Our path generator is a transformer decoder that regressively generates the next turn index conditioned on the previously generated turn sequence. Our reasoning model is a combination of a vanilla graph convolutional network (Kipf & Welling, 2017) and transformer encoder (Vaswani et al., 2017). In each traversing step, we retrieve visual features conditioned by the corresponding dialogue turn and propagate the features to the next step. Finally, the propagated multimodal features are used as input to a transformer decoder to predict the answer. Our experimental results show that our method can improve the results on the Audio-Visual SceneAware Dialogues (AVSD) generation settings (Alamri et al., 2019), outperform previous state-ofthe-art methods. We evaluate our approach through comprehensive ablation analysis and qualitative study. PDC model also provides additional insights on how the inherent contextual cues in dialogue context are learned in neural networks in the form of a reasoning path. 2 RELATED WORK Discourses in monologues. Related to our work is the research of discourse structures. 
A longstudied line of research in this domain focuses on argument mining to identify the structure of argument, claims and premises, and relations between them (Feng & Hirst, 2011; Stab & Gurevych, 2014; Peldszus & Stede, 2015; Persing & Ng, 2016; Habernal & Gurevych, 2017). More recently, Ghosh et al. (2016); Duthie & Budzynska (2018); Jiang et al. (2019) propose to learn argument structures in student essays and official debates. In earlier approaches, Barzilay & Lapata (2008); Lin et al. (2011); Feng et al. (2014) study discourses to derive coherence assessment methods through entity-based representations of text. These approaches are proposed from linguistic theories surrounding entity patterns in discourses, i.e. how they are introduced and discussed (Grosz et al., 1995). Guinaudeau & Strube (2013); Putra & Tokunaga (2017) extend prior work with graphical structures in which sentence similarity is calculated based on semantic vectors representing those sentences. These lines of research show that studying discourse structures is useful in many tasks, such as document ranking and discrimination. However, most of these approaches are designed for monologues rather than dialogues. Discourses in dialogues. More related to our problem setting is discourse research on text in a multi-turn setting. Murakami & Raymond (2010); Boltužić & Šnajder (2014); Swanson et al. (2015); Tan et al. (2016); Niculae et al. (2017); Morio & Fujita (2018); Chakrabarty et al. (2019) introduce new corpus and different methods to mine arguments in online discussion forums. Their models are trained to extract claims and premises in each user post and identify their relations between argument components in each pair of user posts. More recently, Li et al. (2020a); Jo et al. (2020) extend argument mining in online threads to identify attackability and persuasiveness in online posts. In this work, we address the problem of video-grounded dialogue, in which dialogue turns are often semantically connected by a common grounding information source, a video. In this task, a discourse-based approach enables dialogue models to learn to anticipate the upcoming textual information in future dialogue turns. However, directly applying prior work on discourse or argument structures into video-grounded dialogues is not straightforward due to the inherent difference between online discussion posts and video-grounded dialogues. In video-grounded dialogues, the language is often closer to spoken language and there are fewer clear argument structures to be learned. Moreover, the presence of video necessitates the interaction between multiple modalities, text and vision. Incorporating traditional discourse structures to model cross-modality interaction is not straightforward. In this work, we propose to model dialogue context by using compositional graphical structures and constructing information traversal paths through dialogue turns. Graph-based dialogue models. Related to our work is research study that investigates different types of graph structures in dialogue. Hu et al. (2019); Shi & Huang (2019); Zhu et al. (2020) address the “reply_to” relationship among multi-party dialogues through graph networks that incorporate conversational flows in comment threads on social networks, e.g. Reddit and Ubuntu IRC, and online games. Zheng et al. (2019) propose a fully connected graph structure at the turn level for visual dialogues. Concurrently, Ghosal et al. 
(2019) also propose a fully connected graph structure with heterogeneous edges to detect the emotion of participating speakers. All of these methods discover graph structures connecting pairs of dialogue turns of little lexical overlap, resulting in sub-optimal feature propagation. This drawback becomes more significant in question answering problems in multi-turn settings. Our approach constructs graph networks based on compositional similarities. Reasoning path learning. Our method is also motivated by the recent research of machine reading comprehension, e.g. WikiHop (Welbl et al., 2018) and HotpotQA (Yang et al., 2018a). De Cao et al. (2019); Qiu et al. (2019) construct graph networks of supporting documents with entity nodes that are connected based on different kinds of relationships. Tu et al. (2019); Tang et al. (2020) enhance these methods with additional edges connecting output candidates and documents. Extended from these methods are path-based approaches that learn to predict a reasoning path through supporting documents. Kundu et al. (2019); Asai et al. (2020) score and rank path candidates that connect entities in question to the target answer. A common strategy among these methods is the use of bridge entities. However, unlike reading comprehension, dialogues are normally not entity-centric and it is not trivial to directly adopt bridge entities into dialogue context. Cross-modality feature learning. Our work is related to study that integrates visual and linguistic information representation. A line of research in this domain is the problem of visual QA, e.g. (Minh Le et al., 2020; Gao et al., 2019). Closer to our method are methods that adopt compositionality in textual features. Specifically, Socher et al. (2014) introduce image and language representation learning by detecting the component lexical parts in sentences and combining them with image features. The main difference between these approaches and our work is the study of cross-modalities in a multi-turn setting. Our approach directly tackles the embedded sequential order in dialogue utterances and examines how cross-modality features are passed from turn to turn. 3 METHOD To describe our PDC model, we introduce a new graph-based method (Section 3.2) that constructs a graph structure to connect turn-level representations in dialogue context based on their compositional semantics. The compositional semantics consists of sub-nodes detected through syntactical dependency parsing methods. We enhance our approach with a path-based propagation method (Section 3.3) to narrow down the contextual information that facilitates question answering of the current turn. Our approach integrates a strong strategy to model dialogue flows in the form of graphical and path-based information such that contextual linguistic information is exploited to propagate relevant visual features (Section 3.4). Figure 2 demonstrates an overview of our method. 3.1 PROBLEM DEFINITION The inputs to a question answering problem in a multi-turn setting consist of a dialogue D and the visual input of a video I. Each dialogue contains a sequence of dialogue turns, each of which is a pair of question Q and answer A. At each dialogue turn t, we denote the dialogue context Ct as all previous dialogue turns Ct = {(Qi,Ai)}|i=t−1i=1 . Since it is positioned in a dialogue, the question of turn t Qt might be dependent on a subset of the dialogue context Ct. The output is the answer of the current turn Ât. Each textual component, i.e. 
Q and A, is represented as a sequence of token or word indices {wm}|m=Lm=1 ∈ |V|, where L is the sequence length and V is the vocabulary set. The objective of the task is the generation objective that output answers of the current turn: Ât = arg max At P (At|I, Ct,Qt;θ) = arg max At LA∏ m=1 Pm(wm|At,1:m−1, I, Ct,Qt;θ) (1) 3.2 COMPOSITIONAL SEMANTIC GRAPH OF DIALOGUE CONTEXT The semantic relations between dialogue turns are decomposed to semantic relations between subnodes that constitute each turn. These composition relations serve as strong clues to determine how a dialogue turn is related to another. We first employ a co-reference resolution system, e.g. (Clark & Manning, 2016), to replace pronouns with the original entities. We then explore using the Stanford parser system1 to discover sub-nodes. The parser decomposes each sentence into grammatical components, where a word and its modifier are connected in a tree structure. For each dialogue turn, we concatenate the question and answer of that turn as input to the parser. The output dependency tree is pruned to remove unimportant constituents and merge adjacent nodes to form a semantic unit. 1v3.9.2 retrieved at https://nlp.stanford.edu/software/lex-parser.shtml A graph structure G is then constructed. Any two turns are connected if one of their corresponding sub-nodes are semantically similar. To calculate the similarity score, we obtain their pre-trained word2vec embeddings2 and compute the cosine similarity score. Algorithm 1 provides the details of the procedure to automatically construct a semantic graph. Note that our approach can also be applied with other co-reference resolution systems, parser, or pre-trained embeddings. Unlike graph structures in machine reading comprehension such as Wikipedia graph, the semantic graph G is not fixed throughout the sample population but is constructed for each dialogue and at each turn. Algorithm 1: Compositional semantic graph of dialogue context Data: Dialogue context Ct, question of the current turn Qt Result: Semantic graph G = (V, E) 1 begin 2 T ←− ∅; G = {V, E}; E ←− ∅; V ←− ∅; S ←− ∅; 3 H ←− Coreference_Resolution([Ct;Qt]); 4 for each dialogue turn h ∈ H do 5 Th ←− Merge_Nodes(Prune_Tree(Dependency_Parse(h))); T ←− T ∪ {Th}; 6 V ←− V ∪ {h}; E ←− E ∪ {〈Turn_Position(h),Turn_Position(h)〉} 7 for each dependency tree T = (VT , ET ) ∈ T do S ←− S ∪ {VT } 8 for each sub-node si ∈ S do 9 for each sub-node sj ∈ S do 10 if not In_Same_Turn(si, sj) and Is_Similar(si, sj) then 11 E ←− E ∪ {〈Get_Dial_Turn(si),Get_Dial_Turn(sj)〉} 12 E ←− E ∪ {〈Get_Dial_Turn(sj),Get_Dial_Turn(si)〉} 13 return G 3.3 LEARNING TO GENERATE REASONING PATHS Our proposed compositional approach to construct a semantic graph in dialogue context ensures lexical overlaps with the question, but the graph structure does not guarantee the temporal order of dialogue turns. To ensure this sequential information is maintained, we train a generator to predict reasoning paths that traverse through current dialogue turn to past dialogue turns. We use a Transformer decoder to model the reasoning paths from the current turn t. The first position of the path, z0 is initialized with the turn-level position embedding of t. The next turn index is generated auto-regressively by conditioning on the previously generated path sequence: z0 = Embed(t) ∈ Rd (2) Z0:m−1 = Embed([t; r̂1, ..., r̂m−1]) (3) where r̂i denotes a predicted dialogue turn index. 
The dialogue context and question of the current turn are represented by embedding vectors of their component tokens. Following Vaswani et al. (2017), their representations are enhanced with the sine-cosine positional encoding PosEncode. Qt = Embed(Qt) + PosEncode(Qt) ∈ RLQt×d (4) Ct = Embed(Ct) + PosEncode(Ct) ∈ RLCt×d (5) Note that the dialogue context representation Ct is the embedding of dialogue turns up to the last turn t− 1, excluding answer embedding of the current turn At. We denote a Transformer attention block as Transformer(query, key, value). The path generator incorporates contextual information through attention layers on dialogue context and question. D (1) path = Transfromer(Z0:m−1, Z0:m−1, Z0:m−1) ∈ R m×d (6) D (2) path = Transfromer(D (1) path, Qt, Qt) ∈ R m×d (7) D (3) path = Transfromer(D (2) path, Ct, Ct) ∈ R m×d (8) 2https://code.google.com/archive/p/word2vec/ At the m-th decoding step (m ≥ 1), our model selects the next dialogue turn among the set of dialogue turns that are adjacent to one at (m − 1)-th decoding step in the semantic graph. This is enforced through masking the softmax output scores in which non-adjacent turn indices are assigned to a very low scalar smasked. We denote the adjacency matrix of semantic graph G = (V, E) as a square matrix A of size |V| × |V| where Ai,j = 1 if 〈i, j〉 ∈ E and Ai,i = 1∀i = 1, ..., |V|. The probability of decoded turns at the m-th decoding step is: Pm = softmax(D (3) path,mWpath) ∈ R |V |, Pm,i = smasked∀i|Ar̂m−1,i = 0 (9) where Wpath ∈ Rd×|V |. The decoding process is terminated when the next decoded token is an [EOP] (end-of-path) token. During inference time, we adopt a greedy decoding approach. Due to the small size of V , we found that a greedy approach can perform as well as beam search methods. The computational cost of generating reasoning paths in dialogue context is, thus, only dependent on the average path length, which is bounded by the maximum number of dialogue turns. Data Augmentation. We train our path generator in a supervision manner. At each dialogue turn t with a semantic graph G, we use a graph traversal method, e.g. BFS, to find all paths that start from the current turn to any past turn. We maintain the ground-truth paths with dialogue temporal order by keeping the dialogue turn index in path position m lower than the turn index in path position m− 1. We also narrow down ground-truth paths based on their total lexical overlaps with the expected output answers. Using the dialogue in Figure 1 as an example, using BFS results in three potential path candidates: 5→ 4, 5→ 2, and 5→ 4→ 2. We select 5→ 4→ 2 as the ground-truth path because it can cover the most sub-nodes in the expected answers. If two paths have the same number of lexical overlaps, we select one with a shorter length. If two paths are equivalent, we randomly sample one path following uniform distribution at each training step. Ground-truth reasoning paths are added with [EOP] token at the final position for termination condition. The objective to train the path generator is the generation objective of reasoning path at each dialogue turn: R̂t = arg max Rt P (Rt|Ct,Qt;φ) = arg max Rt Lpath∏ m=1 Pm(rm|Rt,1:m−1, Ct,Qt;φ) (10) 3.4 MULTIMODAL REASONING FROM REASONING PATHS The graph structure G and generated path R̂t are used as layout to propagate features of both textual and visual inputs. For each dialogue turn from V , we obtain the corresponding embeddings and apply mean pooling to get a vector representation. 
We denote the turn-level representations of V as V ∈ Rd×|V |. We use attention to retrieve the turn-dependent visual features from visual input. M = Transformer(V, I, I) ∈ Rd×|V | (11) where I is a two-dimensional feature representation of visual input I. We define a new multimodal graph based on semantic graph G: Gmm = (Vmm, Emm) where Vmm = M and edges 〈mi,mj〉 ∈ Emm∀i, j|〈i, j〉 ∈ E . We employ a vanilla graph convolution network (Kipf & Welling, 2017) to update turn-level multimodal representations through message passing along all edges. ek = 1 |Ωk| ∑ mj∈Ωk f(mk,mj), e = 1 |V | ∑ k ek, m̃k = g(mk, ek, e) (12) where Ωk is the set of adjacent nodes of mk and f(.) and g(.) are non-linear layers, e.g. MLP and their inputs are just simply concatenated. To propagate features along a reasoning path R̂t, we utilize the updated turn-level multimodal representations M̃ ∈ |V | and traverse the path sequentially through the representation of the corresponding turn index rm in each traversing step. Specifically, We obtain G = {m̃r̂0 , m̃r̂1 ...} ∈ RLpath×d. The traversing process can be done through a recurrent network or a transformer encoder. G̃ = Transformer(G,G,G) ∈ RLpath×d (13) To incorporate propagated features into the target response, we adopt a state-of-the-art decoder model from (Le et al., 2019) that exploits multimodal attention over contextual features. Specifically, We integrate both M̃ and G̃ at each response decoding step through two separate attention layers. Besides, we also experiment with integrating propagated features with decoder as Transformer language models. Transformer language models have shown impressive performance recently in generation tasks by transferring language representations pretrained in massive data (Radford et al., 2019). To integrate, we simply concatenate M̃ and G̃ to the input sequence embeddings as input to language models, similar as (Le & Hoi, 2020; Li et al., 2020b). Optimization. The multimodal reasoning model is learned jointly with other model components. All model parameters are optimized through the objectives from both Equation 1 and 10. We use the standard cross-entropy loss which calculates the logarithm of each softmax score at each decoding position of Ât and R̂t. 4 EXPERIMENTS Dataset. We use the Audio-Visual Sene-Aware Dialogue (AVSD) benchmark developed by Alamri et al. (2019). The benchmark focuses on dialogues grounded on videos from the Charades dataset (Sigurdsson et al., 2016). Each dialogue can have up to 10 dialogue turns, which makes it an appropriate choice to evaluate our approach of reasoning paths over dialogue context. We used the standard visual features I3D to represent the video input. We experimented with the test splits used in the 7th Dialogue System Technology Challenge (DSTC7) (Yoshino et al., 2019) and DSTC8 (Kim et al., 2019). Please see the Appendix A for our experimental setups. Overall Results. The dialogues in the AVSD benchmark focuses on question answering over multiple turns and entail less semantic variance than open-domain dialogues. Therefore, we report the objective scores, including BLEU (Papineni et al., 2002), METEOR (Banerjee & Lavie, 2005), ROUGE-L (Lin, 2004), and CIDEr (Vedantam et al., 2015), which are found to have strong correlation with human subjective scores (Alamri et al., 2019). In Table 1 and 3, we present the test results of our models in comparison with previous models in DSTC7 and DSTC8 respectively. 
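For reference, the turn-level message passing of Eq. 12 can be sketched in NumPy as follows. The weight matrices Wf and Wg stand in for the non-linear layers f and g (modelled here as a single affine map over the concatenated inputs followed by ReLU), which is an assumption about their exact form.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_update(M, adj, Wf, Wg):
    """One round of the message passing in Eq. 12.
    M: (|V|, d) node features; adj: (|V|, |V|) adjacency with self-loops;
    Wf: (2d, d) and Wg: (3d, d) stand in for the non-linear layers f and g."""
    n, d = M.shape
    E = np.zeros_like(M)
    for k in range(n):
        nbrs = np.nonzero(adj[k])[0]
        msgs = [relu(np.concatenate([M[k], M[j]]) @ Wf) for j in nbrs]
        E[k] = np.mean(msgs, axis=0)              # e_k: mean over neighbours
    e = E.mean(axis=0)                            # graph-level summary e
    return np.stack([relu(np.concatenate([M[k], E[k], e]) @ Wg)
                     for k in range(n)])          # updated \tilde{m}_k
```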
In both test splits, our models achieve very strong performance against models without using pre-trained language models. Comparing with models using pre-trained models and additional fine-tuning, our models achieve competitive performances in both test splits. The performance gain of our models when using GPT2 indicates current model sensitivity to language modelling as a generator. A unique benefit of our models from prior approaches is the insights of how the models exploit information from dialogue turns in the form of reasoning paths (Please see example outputs in Figure 3). Ablation Analysis. In Table 4 we report the results of path learning in a global semantic graph. In these graphs, we do not decompose each dialogue turn into component sub-nodes (line 5 in Algorithm 1) but directly compute the similarity score based on the whole sentence embedding. In this case, to train the path generator, we obtain the ground-truth path by using BFS to traverse to the node with the most sentence-level similarity score to the expected answer. We observe that: (1) models that learn paths based on component lexical overlaps results in better performance than paths based on global lexical overlaps in most of the objective metrics. (2) Propagation by reasoning path alone without using GCN does not result in better performance. This can be explained as the information in each traversal step is not independent but still contains semantic dependencies to other turns. It is different from standard reading comprehension problems where each knowledge base is independent and it is not required to propagate features through a graph structure to obtain contextual updates. Please see the Appendix B for additional analysis of Table 4. Impacts of Reasoning Path Learning. We compare models that can learn reasoning paths against those that use a fixed propagation path through the past dialogue turns. From Table 5, we observe that: (1) learning dynamic instance-based reasoning paths outperforms all models that propagate through a default path. This is achieved by using the reasoning path as a skeleton for feature propagation as well as adopting the joint training strategy. We can consider dynamically learned paths as an ideal traversal path to propagate visual cues among all possible paths within the semantic graph of the dialogue context. (2) our path generator can generate reasoning paths well and the model with learned paths can perform as well as one using the oracle paths. (3) due to the short length of reasoning paths (limited by the maximum dialogue length), either beam search or greedy decoding approach is good enough to generate paths. The greedy approach has the advantage of much lower computational cost. Qualitative Analysis. In Figure 3, we demonstrate some examples of our predicted responses and the corresponding reasoning paths. Specifically, we showcase samples in which the reasoning paths are 2-hops (Example A and B) and 3-hops (Example C and D), and the distance in each hop can be over one dialogue turn (Example B and D) or more (Example A and C). The example reasoning paths show to be able to connect a sequence of dialogue turns that are most relevant to questions of the current turn. For instance, in Example A, the reasoning path can connect the 7th and 9th turn to the current turn as they contain lexical overlaps, i.e. “the bag”, and “the cushion”. The path skips the 8th turn which is not relevant to the current question. Likewise, in Example C, the path skips the 4− 8th turns. 
All examples show that dialogue context can be used to extract additional visual clues relevant to the current turn. Information from dialogues, thus, deserves more attention than just being used as a background text input. Please see the Appendix C for additional analysis. 5 CONCLUSION We proposed PDC, a novel approach to learning a reasoning path over dialogue turns for videogrounded dialogues. Our approach exploits the compositional semantics in each dialogue turn to construct a semantic graph, which is then used to derive an optimal path for feature propagation. Our experiments demonstrate that our model can learn to retrieve paths that are most relevant to the current question. We hope our approach can motivate further study to investigate reasoning over multiple turns, especially in complex settings with interconnected dialogue flows (Sun et al., 2019). ACKNOWLEDGMENTS We thank all reviewers for their insightful feedback on the manuscript of this paper. The first author of this paper is supported by the Agency for Science, Technology and Research (A*STAR) Computing and Information Science scholarship. A EXPERIMENTAL SETUP We experiment with the Adam optimizer (Kingma & Ba, 2015). The models are trained with a warm-up learning rate period of 5 epochs before the learning rate decays and the training finishes up to 50 epochs. The best model is selected by the average loss in the validation set. All model parameters, except the decoder parameters when using pre-trained language models, are initialized with uniform distribution (Glorot & Bengio, 2010). The Transformer hyper-parameters are fine-tuned by validation results over d = {128, 256}, h = {1, 2, 4, 8, 16}, and a dropout rate from 0.1 to 0.5. Label smoothing (Szegedy et al., 2016) is applied on labels of Ât (label smoothing does not help when optimizing over R̂t as the labels are limited by the maximum length of dialogues, i.e. 10 in AVSD). B IMPACTS OF COMPOSITIONAL SEMANTIC GRAPH We experiment with model variants based on different types of graph structures. Specifically, we compare our compositional semantic graph against a graph built upon the turn-level global semantics. In these graphs, we do not decompose each dialogue turn into component sub-nodes (line 5 in Algorithm 1) but directly compute the similarity score based on the whole sentence embedding. We also experiment with a fully connected graph structure. In each graph structure, we experiment with temporally ordered edges (TODirect). This is enforce by adding a check whether Get_Dial_Turn(sj) > Get_Dial_Turn(si) in line 11 and removing line 12 in Algorithm 1. From the results in Table 4, we observe that: (1) based on the CIDEr metric, the best performing graph structure is the compositional semantic graph while the global semantic graph and fully connected graph structure are almost equivalent. This is consistent with the previous insight in machine reading comprehension research that entity lexical overlaps between knowledge bases are often overlooked by global embeddings (Ding et al., 2019) and it is not reliable to construct a knowledge graph based on global representations alone. (2) regarding the direction of edges, bidirectional edges and temporally ordered edges perform similarly, indicating that processing dialogue turns following temporal orders provides enough information and backward processing is only supplementary. 
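Two training details from Appendix A, the learning-rate warm-up and label smoothing, can be sketched as below. The exact decay form after warm-up is not specified above, so the linear decay is an assumption, as are the default values.

```python
def lr_schedule(epoch, base_lr=1e-3, warmup_epochs=5, total_epochs=50):
    """Warm-up for the first epochs, then a (hypothetical) linear decay."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return base_lr * (1.0 - 0.9 * progress)

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing (Szegedy et al., 2016): mix the one-hot targets
    (a NumPy array) with a uniform distribution over the vocabulary."""
    vocab_size = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / vocab_size
```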
C ADDITIONAL QUALITATIVE ANALYSIS In Figure 4, we demonstrate examples outputs of reasoning paths and dialogue responses and have the following observations: • For questions that do not involve actions and can be answered by a single frame, there is typically no reasoning path, i.e. the path only includes the current turn (Example A and B). These questions are usually simple and they are rarely involved in multiple dialogue turns. • In many cases, the dialogue agent can predict an appropriate path but still not generate the correct answers (Example D and G). These paths are able to connect turns that are most relevant to the current turns but these past turns do not contain or contain very limited clues to the expected answers. For example, in Example F, the 2nd and 4th turn are linked by the lexical component for “the woman”. However, they do not have useful information relevant to the current turn, i.e. her clothes. • Finally, our approach shows that the current benchmark, AVSD, typically contains one-hop (Example C, D, E) to two-hop (Example F, G, H) reasoning paths over dialogue context. We hope future dialogue benchmarks will factor in the complexity of dialogue context in terms of reasoning hops to facilitate better research of intelligent dialogue systems. Discussion of failure cases. From the above observations, we identify the following scenarios that our models are susceptible to and propose potential directions for improvement. • Long complex utterances. One limitation of our methods is its dependence on syntactical parser methods to decompose a sentence into sub-nodes. In most dialogues, this problem is not too serious due to the short length of utterances, usually just a single sentence. However, in cases that the utterance contains multiple sentences/clauses or exhibits usage of spoken language with loose linguistic syntax, the parser may fail to decompose it properly. For instance, in Example G in Figure 4, the ground-truth answer contains a causality-based clause (“because”), making it harder to identify sub-nodes such as “sneeze” or “dusty”. • Contextualized semantic similarity. Another area we can improve upon this method is to inject some forms of sentence-level contextual cues into each sub-node to improve their semantic representations. For instance, in a hypothetical dialogue that involves 2 question utterances such as the 2nd turn in Example A and the 6th turn in Example E in Figure 4, our method might not detect the connection between these two as they do not have overlap component sub-nodes. However, they are both related to the audio aspect of the video and a reasoning path between these two turns is appropriate. D STATISTICS OF LOCAL VS. GLOBAL SEMANTIC GRAPHS In Table 6, we report the statistics of graph structures constructed by local and global semantics in all data splits of the AVSD benchmark. We observe that constructing graphs with local semantics result in a lower number of instances with no reasoning paths than making graphs with global semantics. This is due to compositionality in our method, resulting in higher lexical overlap between dialogue turns. With our method, the number of sub-nodes per dialogue turn is more than 4 on average, making it easier to connect dialogue turns. This also leads to a larger and more diverse set of reasoning paths for supervision learning. In local semantic graphs, the average number of reasoning paths per dialogue turn is 2 to 3 on average, higher than this number in global semantic graphs. 
Although our method requires additional computational effort to construct these graphs, it scales with the size of the dialogue, i.e. the number of dialogue turns. To construct these graphs efficiently within a dialogue, the semantic graph of a dialogue turn can be built on top of the semantic graph of the previous turn. This is done by simply adding the new sub-nodes to the previous turn's semantic graph and defining new edges adjacent to these sub-nodes only. In this way, the complexity of our graph construction method is linear in the number of dialogue turns.
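The incremental update just described can be sketched as follows; embed and cosine are the same hypothetical helpers as in the graph-construction sketch above (a pretrained embedding lookup and cosine similarity), and S accumulates the (turn index, sub-node) pairs seen so far.

```python
def extend_semantic_graph(V, E, S, new_turn, new_subnodes, embed, cosine,
                          threshold=0.7):
    """Extend the semantic graph of turn t-1 to turn t. Only edges adjacent
    to the new turn's sub-nodes are computed, so edges among past turns are
    never recomputed."""
    V.append(new_turn)
    E.add((new_turn, new_turn))                      # self-loop
    for s_new in new_subnodes:
        for j, s_old in S:                           # sub-nodes of past turns
            if cosine(embed(s_new), embed(s_old)) >= threshold:
                E.add((new_turn, j))
                E.add((j, new_turn))
    S.extend((new_turn, s) for s in new_subnodes)
    return V, E, S
```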
1. What is the focus and contribution of the paper on visual question answering in a multi-turn or conversational setting?
2. What are the strengths and weaknesses of the proposed approach regarding its ability to simulate dependencies between dialogue turns and form a reasoning path?
3. How does the reviewer assess the novelty and originality of the proposed method, particularly in Step 1, compared to prior works in NLP and multi-modal settings?
4. What are some related works in discourse structure construction, entity graph + semantic similarity graph, and cross-modality representation refinement that the author could have cited and discussed?
5. How does the reviewer evaluate the effectiveness and appropriateness of the transformer model for decoding the constructed reasoning path and generating answers?
6. Are there any concerns or suggestions regarding the clarity, organization, and terminology used in certain sections of the paper, such as Sections 3.3 to 3.4?
Review
Review Summary: This paper addresses the visual question answering in a multi-turn or conversational setting. Given a video (series of frames or images), a model has to reason across space and time to arrive at a correct answer for a given question. This task involves understanding the content and context of dialogue turns, i.e., given a question and N dialogue turns, only M<<N of the dialogue turns are strongly related to the question posed. This paper proposes to simulate the dependencies between dialogue turns, forming a reasoning path, to answer a given question. In a way, the proposed approach selects relevant dialogue turns that are useful to answer the question. There are two steps to make the reasoning path: (1) At each dialogue turn, a graph network is constructed at the turn level. Any two turns are connected if they have an overlapping lexical span or if their lexical spans are semantically similar. (2) Secondly, a path generator is trained to predict a path from the current dialogue turn to past dialogue turns that provide additional and relevant cues to answer the current question. Ultimately, the main idea to create a reasoning path is based on compositional semantic similarities. Comments (Technical, Major Flaws of this paper): (A) I am not sure whether the author(s) is aware, but from the NLP perspective, the current method (step 1) is trying to simulate the discourse structure of dialogues. I believe that this is an important direction, and the uniqueness of this works lies in the multi-modality of the input, i.e., possibility of the interplay between texts and images (using the information in both modalities). The claimed novelty in this paper is in the construction or usage of reasoning graph, i.e., to construct a graph structure to connect turn-level representations in dialogue. However, in Step 1, the use of entity and/or compositional similarity to create a graph structure out of a text is not new at all, and the paper fails to cite related works, as if it is the first one to propose this. In fact, the idea has been used in NLP for a long time (albeit mostly in the monologue). I am not sure whether combining entity with action phrases (called "lexical spans" in the paper) is new. Can you confirm whether the proposed "lexical spans" is indeed new to construct/simulate the discourse structure? Regarding step 1, perhaps the main contribution of this paper is applying the idea to dialogues, instead of monologues? Another possible contribution is "filtering out" unimportant semantic relations. In normal discourse structure, all parts of texts are connected in a single structure. However, in the context of this paper, only edges that are relevant to the posed question are used. Unless the paper can discuss the related NLP works for step 1, I can only treat this paper as the extension of the corresponding NLP method in a multi-modal setting. There is an engineering contribution, but not from the methodological (theoretical) perspective. I think the author(s) will benefit much by surveying papers on discourse structures (or the ``shallow" construction of them), instead of machine reading comprehension. Many studies tried to establish discourse structure (albeit in a monologue) using entities that are mentioned and their semantic representations. A few of such works are: R. Barzilay and M. Lapatta. 2008. Modelling Local Coherence: An Entity-based Approach. https://www.aclweb.org/anthology/J08-1001.pdf C. Guinaudeau and M. Strube. 2013. Graph-based Local Coherence Modeling. 
https://www.aclweb.org/anthology/P13-1010/ J.W.G. Putra and T. Tokunaga. 2017. http://www.aclweb.org/anthology/W/W17/W17-2410.pdf The currently proposed method step 1 seems to be the combination of entity graph + semantic similarity graph in these related works, but the current paper "filters" only edges relevant to the posed question. A related work to construct the discourse structure in dialogues: G. Morio and K. Fujita. 2018. End-to-end Argument Mining for Discussion Threads Based on Parallel Constrained Pointer Architecture. https://www.aclweb.org/anthology/W18-5202.pdf (B) The reasoning model, which is a combination of GCN + transformer can be interesting. However, the idea of cross-modality representation refinement is somewhat similar to what has been studied in VQA. Le, T. M., Le, V., Venkatesh, S., & Tran, T. (2020). Dynamic Language Binding in Relational Visual Reasoning. In IJCAI 2020. Gao, P., Jiang, Z., You, H., Lu, P., Hoi, S. C., Wang, X., & Li, H. (2019). Dynamic fusion with intra-and inter-modality attention flow for visual question answering. In CVPR 2019. (C) After constructing the reasoning path (in response to the given question), the next step is to decode such representation to generate the answer. This paper proposes to use the transformer model to do that. I believe the use of the transformer model to generate text is not new. In fact, the author(s) mentions this in the paper (the last paragraph of Section 3.4). (D) In overall, if we look at the pipeline (system) level, the proposed pipeline is new (the whole process). However, I seriously concern about the step (1) of the proposed method (page 1). My main concern about this paper is its lack of awareness of related works in text processing (step 1 of their method). In fact, it fails to cite relevant works (that are very similar to this work). I might appreciate this paper in terms of engineering contribution (in a multi-modal setting), but I cannot acknowledge that step 1 is novel. Having that said, I think the authors need to provide a comparison to related works, proving the novelty of the current method. I am willing to increase the rating if the authors can properly address my concerns during the rebuttal phase. (E) The content from 3.3 to 3.4 is very hard to follow. Correction of terms: - linguistic dependency parsers --> "syntactical" dependency parsers (this is the correct term) - linguistically, the term "lexical span" is weird. A span is a series of continuous lexicons (in the text surface). I suggest using a better term, as the "lexical span" in this paper might be discontinuous (do I misunderstand?).
ICLR
Title Forced Apart: Discovering Disentangled Representations Without Exhaustive Labels Abstract Learning a better representation with neural networks is a challenging problem, which has been tackled from different perspectives in the past few years. In this work, we focus on learning a representation that would be useful in a clustering task. We introduce two novel loss components that substantially improve the quality of produced clusters, are simple to apply to arbitrary models and cost functions, and do not require a complicated training procedure. We perform an extensive set of experiments, supervised and unsupervised, and evaluate the proposed loss components on two most common types of models, Recurrent Neural Networks and Convolutional Neural Networks, showing that the approach we propose consistently improves the quality of KMeans clustering in terms of mutual information scores and outperforms previously proposed methods. 1 INTRODUCTION Representation learning is an important part of deep learning research, and the ability of deep neural networks to transform the input data into a space that is more suitable to the target task is one of the key reasons for their success. Consider the case of binary classification with a neural network with sigmoid activation function on the last layer, where a network transforms the input data x ∈ Rn into a space R where two classes are linearly separable by applying a sequence of non-linear transformations f(x) : Rn → Rk1 → Rk2 → · · · → Rkj → R Note that all representations, learned by the network in the sequence of transformations Ri → Rj , are devoted to one goal: binary classification. The learned intermediate representations can easily be used in tasks similar to the binary classification, but using them in a different task may be problematic. Consider the case of multivariate time series classification with an RNN model, depicted in Figure 1 with a sigmoid activation function in the last FC2 layer and a ReLU activation function in the layer FC1. Note that ReLU activation produces non-negative vectors. During a regular training procedure with binary cross-entropy loss, the model will learn weights that produce two patterns of activation of the layer FC1: roughly orthogonal vectors for the samples that belong to different classes, and roughly parallel vectors for the samples that belong to the same class. Indeed, the value of the output scalar is the result of taking the dot product between the weights w of the final layer FC2 (a single vector in this case) and the output h of the penultimate hidden layer FC1. Via the geometric interpretation of the dot product, this value is highest when the cosine between the vectors 1, and minimized when the cosine is −1. However, since the penultimate layer has the ReLU activation, the vectors cannot point in opposite directions, therefore, they must be orthogonal. maxwTh = max‖w‖‖h‖ cos θ ⇒ θ = 0 (1) minwTh,h ≥ 0 = min‖w‖‖h‖ cos θ ⇒ θ = π 2 (2) hi||hj , if yi = yj (3) hi⊥hi, if yi 6= yj (4) where yi is the corresponding binary label for hidden state hi. In this work, we focus on learning a better representation of the input that could be used in downstream tasks such as clustering. Specifically, we are interested in learning the representation that would enable clustering by virtue of revealing its latent structure, while using the limited information provided by the binary classification task. 
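The prediction of Eqs. 1-4 can be checked empirically on a trained baseline: given the non-negative penultimate-layer activations H and the binary labels y, the mean pairwise cosine similarity should be close to 1 within a group and close to 0 across groups. The sketch below assumes H and y are NumPy arrays extracted from whichever model is being inspected.

```python
import numpy as np

def group_cosines(H, y):
    """Mean pairwise cosine similarity of penultimate activations H within
    and across the two binary groups (y holds 0/1 labels)."""
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)
    C = Hn @ Hn.T
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    diff = y[:, None] != y[None, :]
    return C[same].mean(), C[diff].mean()
```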
In order to force the network to learn such diverged representations, we propose two novel loss components that can be applied to an arbitrary cost function and work in both weakly-supervised and unsupervised settings. We evaluate the proposed loss components empirically on two most common types of models, Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN) and different types of input data (time series, images, texts). Our approach shows consistent improvement of the quality of KMeans clustering in terms of mutual information scores, outperforming previous methods. 2 RELATED WORK In the past few years, a substantial amount of work has been dedicated to learning a better representation of the input data that can be either used in downstream tasks, such as KMeans clustering, or to improve generalizability or performance of the model. In general, these works can be divided into three categories: (1) approaches that introduce a new loss component that can be easily applied to an arbitrary cost function (discriminative models), (2) approaches that require a complicated or cumbersome training procedure (discriminative models), and (3) probabilistic generative and/or adversarial models. Approaches from the first group propose new loss components that can be applied in a straightforward manner to an arbitrary cost function, supervised or unsupervised. Cheung et al. (2014) proposed a cross-covariance penalty (XCov) to force the network to produce representations with disentangled factors. The proposed penalty is, essentially, cross-covariance between the predicted labels and the activations of samples in a batch. Their experiments showed that the network can produce a representation, with components that are responsible to different characteristics of the input data. For example, in case of the MNIST dataset, there was a class-invariant factor that was responsible for the style of the digit, and in case of the Toronto Faces Dataset (Susskind et al., 2010), there was a factor responsible for the subject’s identity. Similarly, but with a different goal in mind, Cogswell et al. (2015) proposed a new regularizer (DeCov), that minimizes cross-covariance of hidden activations, leading to non-redundant representations and, consequently, less overfitting and better generalization. DeCov loss is trying to minimize the Frobenius norm of the covariance matrix between all pairs of activations in the given layer. The authors’ experiments showed that the proposed loss significantly reduced overfitting and led to a better classification performance on a variety of datasets. The second group of methods requires a modification of the standard training procedure with backpropagation and stochastic gradient descent optimizers. Liao et al. (2016) proposed a method to learn parsimonious representations. Essentially, the proposed algorithm iteratively calculates cluster centroids, which are updated every M iterations and used in the cost function. The authors’ experiments showed that such algorithm leads to a better generalization and a higher test performance of the model in case of supervised learning, as well as unsupervised and even zero-shot learning. Similarly, Xie et al. (2016) proposed an iterative algorithm that first calculates soft cluster assignments, then updates the weights of the network and cluster centroids. This process is repeated until convergence. In contrast to Liao et al. 
(2016), the authors specifically focused on the task of learning better representations for clustering, and showed that the proposed algorithm gives a significant improvement in clustering accuracy. Finally, a new group of recently emerged methods focus on disentangling the factors of variation (e.g., style and class). Kingma et al. (2014) proposed deep generative models for semi-supervised learning and showed that is possible to generate samples from the target class with variations in style, and vice versa. Makhzani et al. (2015) proposed a new approach, called adversarial autoencoder (AAE) and performed a variety of experiments, including semi-supervised and unsupervised clustering, achieving impressive results on MNIST (LeCun et al., 1998) and Street View House Numbers (Netzer et al., 2011) datasets. However, since this methods includes adversarial networks, the training of such systems is rather cumbersome. For example, in the semi-supervised autoencoders experiments, the training of the system consisted of three different phases: a reconstruction phase, a regularization phase, and a semi-supervised classification phase, where the regularization phase itself consists of two sub-phases of updating discriminator and generator respectively. Finally, (Mathieu et al., 2016) proposed a conditional generative model that is a combination of Variational Autoencoder (Kingma & Welling, 2013) and Generative Adversarial Networks (Goodfellow et al., 2014) for disentangling factors of variations. Our proposed loss components belong to the first group and, in contrast to the other methods do not require a complicated training procedure, can easily be used with any cost function, and work in both weakly-supervised and unsupervised settings. 3 THE PROPOSED METHOD Inspired by Equation 1 and the work of Cheung et al. (2014) and Cogswell et al. (2015), we propose two novel loss components, which despite their simplicity, significantly improve the quality of the clustering over the representations produced by the model. The first loss component Lsingle works on a single layer and does not affect the other layers in the network, which may be a desirable behaviour in some cases. The second loss component Lmulti affects the entire network behind the target layer and forces it to produce disentangled representations in more complex and deep networks in which the first loss may not give the desired improvements. 3.1 SINGLE LAYER LOSS Consider the model in Figure 1. The layer FC2 has output size of 1 and produces a binary classification decision. The output of the layer FC1 is used to perform KMeans clustering. Recall from the example in the introduction that we want to force the model to produce divergent representations for the samples that belong to the same class, but are in fact substantively different from each other. One way to do it would be to force the rows of the weight matrix WFC1 of the FC1 layer to be different from each other, leading to different patterns of activations in the output of the FC1 layer. Formally, it can be expressed as follows: Lsingle = k∑ i=1 k∑ j=i+1 fl(di, dj) + fl(dj , di) (5) where dk are normalized weights of the row k of the weights matrix W of the given layer: dk = softmax(W [k]) (6) and fl(di, dj) is a component of the loss between the rows i and j: fl(xi, xj) = max(0,m−DKL(xi||xj)) (7) wherem is a hyperparameter that defines the desired margin of the loss component andDKL(di||dj) is the Kullback–Leibler divergence1 between the probability distributions di and dj . 
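For reference, a minimal NumPy sketch of Lsingle is given below; the nested summation mirrors Eq. 5, and the vectorised variant used in practice is discussed in Section 4.4. The default margin is only a placeholder.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-8):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def f_l(p, q, m):
    return np.maximum(0.0, m - kl(p, q))          # Eq. 7: hinge on the KL

def L_single(W, m=5.0):
    """Eq. 5: pairwise hinge-on-KL loss over the softmax-normalised rows
    of the weight matrix W of the target layer."""
    D = softmax(W)                                 # Eq. 6, row-wise
    k = D.shape[0]
    return sum(f_l(D[i], D[j], m) + f_l(D[j], D[i], m)
               for i in range(k) for j in range(i + 1, k))
```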
3.2 MULTILAYER LOSS Note that the loss component Lsingle affects only the weights of the specific layer, as it operates not on the outputs of the layer but directly on its weights, similar to, for example, `2 regularization. Therefore, this loss component may help to learn a better representation only if the input to the target layer still contains the information about latent characteristics of the input data. This might be the case in simple shallow networks, but in case of very deep complex networks the input data is nonlinearly transformed so many times that only the information that is needed for binary classification left, and all the remaining latent characteristics of the input data were lost as not important for binary classification (see the Figure 3a). Indeed, as we can see from the experiments in Section 4, the loss component described above substantially improves the quality of clustering in a simple baseline case. However, in the case of a more complex model, this improvement is much less impressive. Therefore, we also propose a loss component that can influence not only one specific layer, but all layers before it, in order to force the network to produce a better representation. Recall again that we want to force the model to produce disentangled representations of the input data. Namely, that these representations should be sufficiently different from each other even if two 1Note that the proposed framework does not limit the choice of divergence measure between the two distributions, for example, the Jensen-Shannon divergence can be used, etc. samples have the same label. We propose the following loss component in order to produce such properties: Lmulti = 1 N2s N∑ i=1 N∑ j=1 { fl(h s i , h s j) + fl(h s j , h s i ) yi = yj 0 yi 6= yj (8) where hsk is a normalized output of the target layer h for the sample k: hsk = softmax(hk) (9) yk is its the ground truth label, N is the number of samples in the batch, Ns is number of samples that have the same label, and fl(hi, hj) is the function defined in Equation 7. Note that this loss component Lmulti works on the outputs of the target layer, and therefore, it affects the whole network behind the layer on which it is applied, overcoming the local properties of the Lsingle loss. 3.3 UNSUPERVISED LEARNING Although our main focus in the presented experiments is on a binary classification task, both of our proposed loss components can be used in unsupervised learning as well. The loss component Lsingle does not require any labels so it can be used without modifications. The loss component Lmulti can be applied to unlabeled data by just taking the summations without consideration of labels of the samples as follows: Lmulti2 = 1 N2 N∑ i=1 N∑ j=1 fl(h s i , h s j) + fl(h s j , h s i ) (10) For example, as autoencoder models are a common choice to learn representations to use in a downstream task, the proposed loss components can be easily applied to its cost function as follows: Lae = (1− α) ∗ 1 N N∑ i=1 ||Xi − X̂i||2 + α ∗ Lmulti (11) where the first part is a standard reconstruction cost for autoencoder, the second is the proposed loss component, and α is a hyperparameter reflecting how much importance is given to it. 3.4 THE MARGIN HYPERPARAMETER m One important choice to be made while using the proposed loss components is the value of the margin hyperparameter m. 
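Both loss components expose the margin m explicitly. A NumPy sketch of Lmulti (Eq. 8) and its unsupervised variant (Eq. 10) is given below; the helpers restate those of the Lsingle sketch above, the diagonal i = j terms (which only add a constant) are skipped, and the normalisation over same-label pairs is a simplification of the N_s^2 factor in Eq. 8.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def f_l(p, q, m, eps=1e-8):                        # Eq. 7
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)))
    return max(0.0, m - kl)

def L_multi(H, y, m=0.5):
    """Eq. 8: hinge-on-KL loss over softmax-normalised target-layer
    activations, restricted to pairs of samples with the same label."""
    Hs, N = softmax(H), H.shape[0]
    pairs = [(i, j) for i in range(N) for j in range(N)
             if i != j and y[i] == y[j]]
    return sum(f_l(Hs[i], Hs[j], m) + f_l(Hs[j], Hs[i], m)
               for i, j in pairs) / max(1, len(pairs))

def L_multi_unsup(H, m=0.5):
    """Eq. 10: label-free variant summing over all pairs in the batch."""
    Hs, N = softmax(H), H.shape[0]
    return sum(f_l(Hs[i], Hs[j], m) + f_l(Hs[j], Hs[i], m)
               for i in range(N) for j in range(N) if i != j) / N ** 2
```

In the autoencoder setting of Eq. 11, this term is simply mixed with the reconstruction loss through the weight α.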
A larger value of m corresponds to a larger margin between the rows of the weights matrix in case of Lsingle and a larger margin between the activations of the target layer in case of Lmulti. The smaller the value of m, the less influence the proposed loss components have. In our experiments, we found that the proposed loss component Lsingle is relatively stable with respect to the choice ofm, and generally performs better with larger values (in the range 5-10). In case of the loss component Lmulti, we found that even a small value of the marginm (0.1 - 1) disentangles the learned representations better and consequently leads to substantial improvements in the AMI score. In all of the reported experiments, we found that the proposed loss component with a reasonably chosen m does not hurt the model’s performance in the classification task. 4 EXPERIMENTS We performed an extensive set of experiments that covers the two most commonly used in modern research: Recurrent Neural Networks and Convolutional Neural Networks, as well as entirely different modalities of the input data: time series, images, and texts. In all experiments, we used an RNN or an CNN model without any additional loss components as the baseline and compared our proposed loss components Lsingle and Lmulti with the DeCov regularizer (Cogswell et al., 2015) and XCov penalty (Cheung et al., 2014), as those works are most similar to ours. After the model were trained on the binary classification task, we use the output of the penultimate layer to perform a KMeans clustering. We implemented the models used in all experiments with TensorFlow (Abadi et al., 2016) and used Adam optimizer (Kingma & Ba, 2014) to train the them. 4.1 MNIST STROKES SEQUENCES EXPERIMENTS We performed experiments on the MNIST strokes sequences dataset de Jong (2016)2 to evaluate the proposed loss components the in case of an RNN model and time series data. This dataset contains pen strokes, automatically generated from the original MNIST dataset LeCun et al. (1998). Although the generated sequences do not always reflect a choice a human would made in order to write a digit, the strokes are consistent across the dataset. For this experiment, we split the examples into two groups: samples belonging to the classes from 0 to 4 were assigned to the first group, and samples belonging to the classes from 5 to 9 were assigned to the second group. The model is trained to predict the group of a given sample and does not have any access to the underlying classes. We used the model depicted in Figure 1 for this experiment. After the models were trained on the binary classification task, we used the output of the penultimate layer FC2 to perform the KMeans clustering and evaluated the quality of the produced clustering using the original class labels as ground truth assignments. Autoencoder experiments In order to investigate the influence of the proposed loss components in the autoencoder settings, we applied them to an autoencoder model that reconstructs the input sequences from the MNIST strokes sequences dataset. We did not use any label information during this experiments, and used the representation from the intermediate layer of the autoencoder to perform KMeans clustering. 4.2 CIFAR-10 EXPERIMENTS In order to evaluate the proposed loss components on a different type of model and data, we preformed experimented with the CIFAR-10 dataset Krizhevsky & Hinton (2009) using an CNN model. 
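Across all of these experiments, clustering quality is measured in the same way: KMeans is run on the penultimate-layer representations and scored against the original class labels, which the model never observes during training. A scikit-learn sketch of this evaluation is shown below; the feature matrix, label vector, and random seed are placeholders, and the average_method arguments are assumed to correspond to the AMImax and NMIsqrt variants reported in Section 5.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_mutual_info_score,
                             normalized_mutual_info_score)

def evaluate_clustering(features, true_labels, n_clusters=10, seed=0):
    """KMeans on penultimate-layer features, scored against the original
    class labels with AMI and NMI."""
    pred = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(features)
    ami = adjusted_mutual_info_score(true_labels, pred, average_method="max")
    nmi = normalized_mutual_info_score(true_labels, pred,
                                       average_method="geometric")
    return ami, nmi
```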
2https://github.com/edwin-de-jong/mnist-digits-stroke-sequence-data As in the MNIST strokes sequences experiments, we split the examples in two groups: samples belonging to the classes “airplan”, “automobile”, “bird”, “cat”, and “deer” were assigned to the first group, and samples belonging to the classes “dog”, “frog”, “horse”, “ship”, “truck” were assigned to the second group. Note that this assignment is quite arbitrary as it simply reflects the order of the labels of the classes in the dataset (namely, the labels 0-4 for the first group and the labels 4-9 for the second group). All groups contain rather different types of objects, both natural and human-made. For these experiments, we used a CNN model based on the VGG-16 architecture (Simonyan & Zisserman, 2014), depicted on the Figure 4. We discarded the bottom fully connected and convolutional layers as, perhaps, they are too big for this dataset. Instead, we appended three convolutional layers to the output of pool3 layer with number of filters 256, 128 and 8 correspondingly. The first two layers use 3x3 convolutions, and the last layer uses 1x1 convolutions. After that, we pass the output through a fully-connected layer of size 15 (FC1), which produces the representations used in clustering, and a fully connected layer of size 1 (FC2) with the sigmoid activation function to produce a binary classification decision. 4.3 TEXT CLASSIFICATION EXPERIMENTS Finally, to prove a wide generalizability of the proposed loss components, we performed text classification experiments using an RNN model again, but on an entirely different type of data. Namely, we used the DBPedia ontology dataset dataset (Zhang et al., 2015), which contains titles and abstract of Wikipedia articles labeled by 14 ontology classes. Again, we split the samples into two groups and trained the model on the binary classification task. Classes “Company”, “EducationalInstitution”, “Artist”, “Athlete”, “OfficeHolder”, “MeanOfTransportation”, “Building” belong to the first group, and the classes “NaturalPlace”, “Village”, “Animal”, “Plant”, “Album”, “Film”, “WrittenWork” belong to the second group. As in subsection 4.1, we used the model depicted on Figure 1. 4.4 IMPLEMENTATION DETAILS Despite the fact the proposed loss components can be directly implemented using two nested for loops, such implementation will not be computationally efficient, as it will lead to a big computational graph operating on separate vectors without using full advantages of highly optimized parallel matrix computations on GPU. Therefore, it is desirable to have an efficient implementation that can use full advantage of modern GPUs. We have developed such an efficient implementation that significantly accelerates the computation of the loss component in return for a higher memory consumption by creating two matrices that contain all combinations of di and dj from the summations in the Equation 5 and performing the operations to calculate the loss on them. We have made our implementation for TensorFlow (Abadi et al., 2016) publicly available on GitHub3 alongside with aforementioned models from the subsection 4.1 and the subsection 4.2. It is worth noting that since the loss component Lsingle operates directly on the weights of the target layer, its computational complexity does not depend on the size of the batch. Instead, it depends on the size of that layer. 
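The efficient implementation referred to above materialises all pairwise KL terms at once instead of looping in Python. A NumPy broadcasting sketch of this idea is given below; the released TensorFlow code may differ in detail, and the same-label restriction of Lmulti can be applied by masking the hinge matrix before summing.

```python
import numpy as np

def pairwise_hinge_kl(D, m):
    """Sum over all ordered pairs (i, j), i != j, of max(0, m - KL(D_i||D_j)).
    D is an (n, k) matrix of softmax-normalised rows (for L_single) or
    activations (for L_multi); memory grows as n^2, as noted above."""
    eps = 1e-8
    logD = np.log(D + eps)
    ent = np.sum(D * logD, axis=1)        # sum_c D_i[c] * log D_i[c]
    cross = D @ logD.T                    # [i, j] = sum_c D_i[c] * log D_j[c]
    kl_mat = ent[:, None] - cross         # [i, j] = KL(D_i || D_j)
    hinge = np.maximum(0.0, m - kl_mat)
    np.fill_diagonal(hinge, 0.0)          # drop i == j terms
    return hinge.sum()
```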
In contrast, the Lmulti operates on the activations of the target layer on all samples in the batch, and its computational complexity depends on the number of samples in the batch. In practice, using the implementation described above, we were able to train models with batch size of 512 and higher without exhausting the GPU’s memory. 5 RESULTS AND DISCUSSION 5.1 QUANTITATIVE ANALYSIS We report the average of the Adjusted Mutual Information (AMImax) and Normalized Mutual Information (NMIsqrt) scores (Vinh et al., 2010) across three runs in Table 1. On the simplest MNIST strokes sequences dataset Lsingle outperforms all other methods, whereas on more challenging and complex datasets Lmilti works the best, probably due to its ability to influence the learned repre- 3http://github.com/placeholder/ sentations on all layers of the network behind the target layer. The proposed loss components also improves the quality of clustering in the autoencoder settings, although the gain is marginal. It is also important to note that in all our experiments accuracy of models was not affected in a harmful way when we applied the proposed loss components, or the effect was negligible (less than 0.5%). 5.2 QUALITATIVE ANALYSIS To examine the influence of the proposed loss components to the activations of the network, we plot the number of samples, belonging to different underlying classes on the x axis for which the neurons on the y axis were active the most in the binary classification task on Figure 2 and Figure 3 for on MNIST strokes sequences and CIFAR-10 datasets correspondingly. As we can see from these figures, during a regular training with the binary classification objective, without the proposed loss component the models tend to learn representations that is specific to the target binary label, even though the samples within one group come from different classes. The model learns to use mostly just two neurons to discriminate between the target groups and hardly uses the rest of the neurons in the layer. We observe this behaviour across different types of models and datasets: an RNN model applied to a timeseries dataset and an CNN model applied to an image classification dataset behave in the exactly the same way. Both proposed loss components Lsingle and Lmulti force the model to produce diverged representations, and we can see how it changes the patterns of activations in the target layer. It is easy to observe in Figure 2b that the patterns of activations learned by the networks roughly correspond to underlying classes, despite the fact that the network did not have access to them during the training. This pattern is not as easy to see in case of CIFAR-10 dataset (see the Figure 3b), but we can observe that the proposed loss component nevertheless forced the network to activate different neurons for different classes, leading to a better AMI score on the clustering task. In order to further investigate the representations learned by the model, we visualized the representations of samples from the MNIST strokes sequences dataset in Figure 5 using TensorBoard. Figure 5a and Figure 5b in the top row depict the representations learned by the baseline model, colored according to the binary label and the underlying classes, respectively. Figure 5c and Figure 5d in the bottom row depict the representations of the same samples, learned by the model with the Lmulti loss component, colored in the same way. 
It is easy to see that the Lmulti indeed forced the model to learn disentangled representations of the input data. Note how the baseline model learned dense clusters of objects, with samples from the same group (but different classes) compactly packed in the same area. In contrast, the model with the proposed loss component learned considerably better representations which disentangle samples belonging to different classes and placed the them more uniformly in the space. In the real world, the number of clusters is rarely known beforehand. To systematically examine the stability of the proposed loss component, we plotted the Adjusted Mutual Information scores for the baselines methods andLmulti loss component with respect to the number of clusters in Figure 6, using the CIFAR-10 dataset. As can be seen from Figure 6, our loss component consistently outperforms the previously proposed methods regardless the number of clusters. 6 CONCLUSION In this paper, we propose two novel loss components that substantially improve the quality of KMeans clustering, which uses representations of the input data learned by a given model. We performed a comprehensive set of experiments using two popular neural network architectures (RNNs and CNNs), and different modalities of data (image and text). Our results demonstrate that the proposed loss components consistently increase the Mutual Information scores by a significant margin, and outperform previously proposed methods. In addition, we qualitatively analyzed the representations learned by the network by visualizing the activation patterns and relative positions of the samples in the learned space, showing that the proposed loss components indeed force the network to learn diverged representations.
1. What are the strengths and weaknesses of the paper's proposed regularization terms for learning disentangled representations?
2. How do the proposed regularization terms compare to other methods in terms of clustering performance and classification accuracy?
3. What are some potential design choices for the compound hinge loss, and how do they impact the results?
4. How does the method perform when applied to large networks with limited batch sizes?
5. Are there any specific issues with the experimental design or implementation that could be improved upon?
Review
Review This paper proposes two regularization terms to encourage learning disentangled representations. One term is applied to weight parameters of a layer just like weight decay. The other is applied to the activations of the target layer (e.g., the penultimate layer). The core part of both regularization terms is a compound hinge loss of which the input is the KL divergence between two softmax-normalized input arguments. Experiments demonstrate the proposed regularization terms are helpful in learning representations which significantly facilitate clustering performance. Pros: (1) This paper is clearly written and easy to follow. (2) Authors proposed multiple variants of the regularization term which cover both supervised and unsupervised settings. (3) Authors did a variety of classification experiments ranging from time serials, image and text data. Cons: (1) The design choice of the compound hinge loss is a bit arbitrary. KL divergence is a natural similarity measure for probability distribution. However, it seems that authors use softmax to force the weights or the activations of neural networks to be probability distributions just for the purpose of using KL divergence. Have you compared with other choices of similarity measure, e.g., cosine similarity? I think the comparison as an additional experiment would help explain the design choice of the proposed function. (2) In the binary classification experiments, it is very strange to almost randomly group several different classes of images into the same category. I would suggest authors look into datasets where the class hierarchy is already provided, e.g., ImageNet or a combination of several fine-grained image classification datasets. Additionally, I have the following questions: (1) I am curious how the proposed method compares to other competitors in terms of the original classification setting, e.g., 10-class classification accuracy on CIFAR10. (2) What will happen for the multi-layer loss if the network architecture is very large such that you can not use large batch size, e.g., less than 10? (3) In drawing figure 2 and 3, if the nonlinear activation function is not ReLU, how would you exam the same behavior? Have you tried multi-class classification for the case “without proposed loss component” and does the similar pattern still happen or not? Some typos: (1) In introduction, “when the cosine between the vectors 1” should be “when the cosine between the vectors is 1”. (2) In section 4.3, “we used the DBPedia ontology dataset dataset” should be “we used the DBPedia ontology dataset”. I would like to hear authors’ feedback on the issues I raised.
ICLR
Title Forced Apart: Discovering Disentangled Representations Without Exhaustive Labels Abstract Learning a better representation with neural networks is a challenging problem, which has been tackled from different perspectives in the past few years. In this work, we focus on learning a representation that would be useful in a clustering task. We introduce two novel loss components that substantially improve the quality of produced clusters, are simple to apply to arbitrary models and cost functions, and do not require a complicated training procedure. We perform an extensive set of experiments, supervised and unsupervised, and evaluate the proposed loss components on two most common types of models, Recurrent Neural Networks and Convolutional Neural Networks, showing that the approach we propose consistently improves the quality of KMeans clustering in terms of mutual information scores and outperforms previously proposed methods. 1 INTRODUCTION Representation learning is an important part of deep learning research, and the ability of deep neural networks to transform the input data into a space that is more suitable to the target task is one of the key reasons for their success. Consider the case of binary classification with a neural network with sigmoid activation function on the last layer, where a network transforms the input data x ∈ Rn into a space R where two classes are linearly separable by applying a sequence of non-linear transformations f(x) : Rn → Rk1 → Rk2 → · · · → Rkj → R Note that all representations, learned by the network in the sequence of transformations Ri → Rj , are devoted to one goal: binary classification. The learned intermediate representations can easily be used in tasks similar to the binary classification, but using them in a different task may be problematic. Consider the case of multivariate time series classification with an RNN model, depicted in Figure 1 with a sigmoid activation function in the last FC2 layer and a ReLU activation function in the layer FC1. Note that ReLU activation produces non-negative vectors. During a regular training procedure with binary cross-entropy loss, the model will learn weights that produce two patterns of activation of the layer FC1: roughly orthogonal vectors for the samples that belong to different classes, and roughly parallel vectors for the samples that belong to the same class. Indeed, the value of the output scalar is the result of taking the dot product between the weights w of the final layer FC2 (a single vector in this case) and the output h of the penultimate hidden layer FC1. Via the geometric interpretation of the dot product, this value is highest when the cosine between the vectors 1, and minimized when the cosine is −1. However, since the penultimate layer has the ReLU activation, the vectors cannot point in opposite directions, therefore, they must be orthogonal. maxwTh = max‖w‖‖h‖ cos θ ⇒ θ = 0 (1) minwTh,h ≥ 0 = min‖w‖‖h‖ cos θ ⇒ θ = π 2 (2) hi||hj , if yi = yj (3) hi⊥hi, if yi 6= yj (4) where yi is the corresponding binary label for hidden state hi. In this work, we focus on learning a better representation of the input that could be used in downstream tasks such as clustering. Specifically, we are interested in learning the representation that would enable clustering by virtue of revealing its latent structure, while using the limited information provided by the binary classification task. 
In order to force the network to learn such diverged representations, we propose two novel loss components that can be applied to an arbitrary cost function and work in both weakly-supervised and unsupervised settings. We evaluate the proposed loss components empirically on two most common types of models, Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN) and different types of input data (time series, images, texts). Our approach shows consistent improvement of the quality of KMeans clustering in terms of mutual information scores, outperforming previous methods. 2 RELATED WORK In the past few years, a substantial amount of work has been dedicated to learning a better representation of the input data that can be either used in downstream tasks, such as KMeans clustering, or to improve generalizability or performance of the model. In general, these works can be divided into three categories: (1) approaches that introduce a new loss component that can be easily applied to an arbitrary cost function (discriminative models), (2) approaches that require a complicated or cumbersome training procedure (discriminative models), and (3) probabilistic generative and/or adversarial models. Approaches from the first group propose new loss components that can be applied in a straightforward manner to an arbitrary cost function, supervised or unsupervised. Cheung et al. (2014) proposed a cross-covariance penalty (XCov) to force the network to produce representations with disentangled factors. The proposed penalty is, essentially, cross-covariance between the predicted labels and the activations of samples in a batch. Their experiments showed that the network can produce a representation, with components that are responsible to different characteristics of the input data. For example, in case of the MNIST dataset, there was a class-invariant factor that was responsible for the style of the digit, and in case of the Toronto Faces Dataset (Susskind et al., 2010), there was a factor responsible for the subject’s identity. Similarly, but with a different goal in mind, Cogswell et al. (2015) proposed a new regularizer (DeCov), that minimizes cross-covariance of hidden activations, leading to non-redundant representations and, consequently, less overfitting and better generalization. DeCov loss is trying to minimize the Frobenius norm of the covariance matrix between all pairs of activations in the given layer. The authors’ experiments showed that the proposed loss significantly reduced overfitting and led to a better classification performance on a variety of datasets. The second group of methods requires a modification of the standard training procedure with backpropagation and stochastic gradient descent optimizers. Liao et al. (2016) proposed a method to learn parsimonious representations. Essentially, the proposed algorithm iteratively calculates cluster centroids, which are updated every M iterations and used in the cost function. The authors’ experiments showed that such algorithm leads to a better generalization and a higher test performance of the model in case of supervised learning, as well as unsupervised and even zero-shot learning. Similarly, Xie et al. (2016) proposed an iterative algorithm that first calculates soft cluster assignments, then updates the weights of the network and cluster centroids. This process is repeated until convergence. In contrast to Liao et al. 
(2016), the authors specifically focused on the task of learning better representations for clustering, and showed that the proposed algorithm gives a significant improvement in clustering accuracy. Finally, a new group of recently emerged methods focuses on disentangling the factors of variation (e.g., style and class). Kingma et al. (2014) proposed deep generative models for semi-supervised learning and showed that it is possible to generate samples from the target class with variations in style, and vice versa. Makhzani et al. (2015) proposed a new approach, called the adversarial autoencoder (AAE), and performed a variety of experiments, including semi-supervised and unsupervised clustering, achieving impressive results on the MNIST (LeCun et al., 1998) and Street View House Numbers (Netzer et al., 2011) datasets. However, since this method includes adversarial networks, the training of such systems is rather cumbersome. For example, in the semi-supervised autoencoder experiments, the training of the system consisted of three different phases: a reconstruction phase, a regularization phase, and a semi-supervised classification phase, where the regularization phase itself consists of two sub-phases of updating the discriminator and the generator, respectively. Finally, Mathieu et al. (2016) proposed a conditional generative model that is a combination of the Variational Autoencoder (Kingma & Welling, 2013) and Generative Adversarial Networks (Goodfellow et al., 2014) for disentangling factors of variation. Our proposed loss components belong to the first group and, in contrast to the other methods, do not require a complicated training procedure, can easily be used with any cost function, and work in both weakly-supervised and unsupervised settings. 3 THE PROPOSED METHOD Inspired by Equation 1 and the work of Cheung et al. (2014) and Cogswell et al. (2015), we propose two novel loss components which, despite their simplicity, significantly improve the quality of the clustering over the representations produced by the model. The first loss component Lsingle works on a single layer and does not affect the other layers in the network, which may be a desirable behaviour in some cases. The second loss component Lmulti affects the entire network behind the target layer and forces it to produce disentangled representations in more complex and deep networks in which the first loss may not give the desired improvements. 3.1 SINGLE LAYER LOSS Consider the model in Figure 1. The layer FC2 has an output size of 1 and produces a binary classification decision. The output of the layer FC1 is used to perform KMeans clustering. Recall from the example in the introduction that we want to force the model to produce divergent representations for the samples that belong to the same class, but are in fact substantively different from each other. One way to do it would be to force the rows of the weight matrix WFC1 of the FC1 layer to be different from each other, leading to different patterns of activations in the output of the FC1 layer. Formally, it can be expressed as follows:

Lsingle = Σ_{i=1}^{k} Σ_{j=i+1}^{k} [ f_l(d_i, d_j) + f_l(d_j, d_i) ] (5)

where d_k are the normalized weights of row k of the weight matrix W of the given layer:

d_k = softmax(W[k]) (6)

and f_l(d_i, d_j) is a component of the loss between the rows i and j:

f_l(x_i, x_j) = max(0, m − D_KL(x_i ‖ x_j)) (7)

where m is a hyperparameter that defines the desired margin of the loss component and D_KL(d_i ‖ d_j) is the Kullback–Leibler divergence¹ between the probability distributions d_i and d_j.
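As a concrete illustration of Equations 5-7, a loop-based sketch of Lsingle in TensorFlow 2 style might look as follows. This is our own hedged reconstruction, not the authors' released code; the margin value is an arbitrary example, and a vectorized variant is discussed in Section 4.4.

```python
import tensorflow as tf

def kl_divergence(p, q, eps=1e-8):
    # D_KL(p || q) between two discrete distributions (1-D tensors).
    return tf.reduce_sum(p * (tf.math.log(p + eps) - tf.math.log(q + eps)))

def f_l(x_i, x_j, m):
    # Hinged divergence term from Eq. 7.
    return tf.maximum(0.0, m - kl_divergence(x_i, x_j))

def single_layer_loss(W, m=5.0):
    # Eq. 6: turn each row of the weight matrix into a probability distribution.
    d = tf.nn.softmax(W, axis=-1)
    k = W.shape[0]
    loss = tf.constant(0.0)
    # Eq. 5: symmetrised hinge over all row pairs i < j.
    for i in range(k):
        for j in range(i + 1, k):
            loss += f_l(d[i], d[j], m) + f_l(d[j], d[i], m)
    return loss
```

When applied to a Keras Dense layer, W would typically be the transposed kernel of that layer, since Keras stores kernels with shape (inputs, units) while the loss above treats each row as one unit's incoming weights.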
3.2 MULTILAYER LOSS Note that the loss component Lsingle affects only the weights of the specific layer, as it operates not on the outputs of the layer but directly on its weights, similar to, for example, ℓ2 regularization. Therefore, this loss component may help to learn a better representation only if the input to the target layer still contains the information about latent characteristics of the input data. This might be the case in simple shallow networks, but in the case of very deep complex networks the input data is nonlinearly transformed so many times that only the information that is needed for binary classification is left, and all the remaining latent characteristics of the input data are lost as not important for binary classification (see Figure 3a). Indeed, as we can see from the experiments in Section 4, the loss component described above substantially improves the quality of clustering in a simple baseline case. However, in the case of a more complex model, this improvement is much less impressive. Therefore, we also propose a loss component that can influence not only one specific layer, but all layers before it, in order to force the network to produce a better representation. Recall again that we want to force the model to produce disentangled representations of the input data. Namely, these representations should be sufficiently different from each other even if two samples have the same label.

¹Note that the proposed framework does not limit the choice of divergence measure between the two distributions; for example, the Jensen–Shannon divergence can be used, etc.

We propose the following loss component in order to produce such properties:

Lmulti = (1 / N_s²) Σ_{i=1}^{N} Σ_{j=1}^{N} { f_l(h_i^s, h_j^s) + f_l(h_j^s, h_i^s) if y_i = y_j; 0 if y_i ≠ y_j } (8)

where h_k^s is the normalized output of the target layer h for the sample k:

h_k^s = softmax(h_k) (9)

y_k is its ground truth label, N is the number of samples in the batch, N_s is the number of samples that have the same label, and f_l(h_i, h_j) is the function defined in Equation 7. Note that this loss component Lmulti works on the outputs of the target layer, and therefore, it affects the whole network behind the layer on which it is applied, overcoming the local properties of the Lsingle loss. 3.3 UNSUPERVISED LEARNING Although our main focus in the presented experiments is on a binary classification task, both of our proposed loss components can be used in unsupervised learning as well. The loss component Lsingle does not require any labels, so it can be used without modifications. The loss component Lmulti can be applied to unlabeled data by just taking the summations without consideration of the labels of the samples as follows:

Lmulti2 = (1 / N²) Σ_{i=1}^{N} Σ_{j=1}^{N} [ f_l(h_i^s, h_j^s) + f_l(h_j^s, h_i^s) ] (10)

For example, as autoencoder models are a common choice to learn representations to use in a downstream task, the proposed loss components can be easily applied to their cost function as follows:

Lae = (1 − α) · (1/N) Σ_{i=1}^{N} ‖X_i − X̂_i‖² + α · Lmulti (11)

where the first part is the standard reconstruction cost for an autoencoder, the second is the proposed loss component, and α is a hyperparameter reflecting how much importance is given to it.
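A matching sketch for Lmulti (Equation 8) and its unsupervised variant (Equation 10) is shown below, again as our own illustration rather than the authors' code. The 1/N_s² factor in Equation 8 is interpreted here as averaging over the same-label pairs, and the margin value is an arbitrary example.

```python
import tensorflow as tf

def pairwise_kl(p, eps=1e-8):
    # D_KL(p_i || p_j) for every pair of rows in p: (N, D) -> (N, N).
    p_i = p[:, None, :]
    p_j = p[None, :, :]
    return tf.reduce_sum(p_i * (tf.math.log(p_i + eps) - tf.math.log(p_j + eps)), axis=-1)

def multilayer_loss(h, y=None, m=0.5):
    # Eq. 9: normalise the target-layer activations of each sample.
    hs = tf.nn.softmax(h, axis=-1)
    d = pairwise_kl(hs)
    # f_l(h_i, h_j) + f_l(h_j, h_i) for every (i, j) pair.
    pair = tf.maximum(0.0, m - d) + tf.maximum(0.0, m - tf.transpose(d))
    if y is None:
        # Unsupervised variant (Eq. 10): average over all N^2 pairs.
        n = tf.cast(tf.shape(h)[0], tf.float32)
        return tf.reduce_sum(pair) / (n * n)
    # Supervised variant (Eq. 8): only pairs with identical labels contribute;
    # the 1/N_s^2 factor is interpreted here as averaging over those pairs.
    same = tf.cast(tf.equal(y[:, None], y[None, :]), tf.float32)
    return tf.reduce_sum(pair * same) / tf.maximum(tf.reduce_sum(same), 1.0)
```

Calling multilayer_loss(h) without labels gives the term used in the autoencoder objective of Equation 11.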
3.4 THE MARGIN HYPERPARAMETER m One important choice to be made while using the proposed loss components is the value of the margin hyperparameter m. A larger value of m corresponds to a larger margin between the rows of the weight matrix in the case of Lsingle and a larger margin between the activations of the target layer in the case of Lmulti. The smaller the value of m, the less influence the proposed loss components have. In our experiments, we found that the proposed loss component Lsingle is relatively stable with respect to the choice of m, and generally performs better with larger values (in the range 5-10). In the case of the loss component Lmulti, we found that even a small value of the margin m (0.1-1) disentangles the learned representations better and consequently leads to substantial improvements in the AMI score. In all of the reported experiments, we found that the proposed loss component with a reasonably chosen m does not hurt the model’s performance in the classification task. 4 EXPERIMENTS We performed an extensive set of experiments that covers the two types of models most commonly used in modern research, Recurrent Neural Networks and Convolutional Neural Networks, as well as entirely different modalities of the input data: time series, images, and texts. In all experiments, we used an RNN or a CNN model without any additional loss components as the baseline and compared our proposed loss components Lsingle and Lmulti with the DeCov regularizer (Cogswell et al., 2015) and the XCov penalty (Cheung et al., 2014), as those works are most similar to ours. After the models were trained on the binary classification task, we used the output of the penultimate layer to perform KMeans clustering. We implemented the models used in all experiments with TensorFlow (Abadi et al., 2016) and used the Adam optimizer (Kingma & Ba, 2014) to train them. 4.1 MNIST STROKES SEQUENCES EXPERIMENTS We performed experiments on the MNIST strokes sequences dataset (de Jong, 2016)² to evaluate the proposed loss components in the case of an RNN model and time series data. This dataset contains pen strokes, automatically generated from the original MNIST dataset (LeCun et al., 1998). Although the generated sequences do not always reflect the choices a human would make in order to write a digit, the strokes are consistent across the dataset. For this experiment, we split the examples into two groups: samples belonging to the classes from 0 to 4 were assigned to the first group, and samples belonging to the classes from 5 to 9 were assigned to the second group. The model is trained to predict the group of a given sample and does not have any access to the underlying classes. We used the model depicted in Figure 1 for this experiment. After the models were trained on the binary classification task, we used the output of the penultimate layer FC1 to perform the KMeans clustering and evaluated the quality of the produced clustering using the original class labels as ground truth assignments. Autoencoder experiments In order to investigate the influence of the proposed loss components in the autoencoder settings, we applied them to an autoencoder model that reconstructs the input sequences from the MNIST strokes sequences dataset. We did not use any label information during these experiments, and used the representation from the intermediate layer of the autoencoder to perform KMeans clustering. 4.2 CIFAR-10 EXPERIMENTS In order to evaluate the proposed loss components on a different type of model and data, we performed experiments with the CIFAR-10 dataset (Krizhevsky & Hinton, 2009) using a CNN model.
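The evaluation protocol shared by these experiments (train on coarse binary groups, then cluster the penultimate-layer activations and score them against the hidden fine-grained labels) can be sketched as follows. The file names, the cluster count, and the scikit-learn averaging options are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score, normalized_mutual_info_score

# Hypothetical inputs: penultimate-layer activations of the trained model and
# the original fine-grained labels, which the model never saw during training.
features = np.load("penultimate_activations.npy")    # shape (N, D), placeholder file
fine_labels = np.load("fine_labels.npy")              # shape (N,), e.g. digits 0-9

# The weak supervision actually used for training: classes 0-4 vs. classes 5-9.
group_labels = (fine_labels >= 5).astype(int)          # not used for scoring

# Cluster the representations and score them against the hidden classes.
pred = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)
print("AMI_max :", adjusted_mutual_info_score(fine_labels, pred, average_method="max"))
print("NMI_sqrt:", normalized_mutual_info_score(fine_labels, pred, average_method="geometric"))
```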
²https://github.com/edwin-de-jong/mnist-digits-stroke-sequence-data

As in the MNIST strokes sequences experiments, we split the examples into two groups: samples belonging to the classes “airplane”, “automobile”, “bird”, “cat”, and “deer” were assigned to the first group, and samples belonging to the classes “dog”, “frog”, “horse”, “ship”, and “truck” were assigned to the second group. Note that this assignment is quite arbitrary as it simply reflects the order of the labels of the classes in the dataset (namely, the labels 0-4 for the first group and the labels 5-9 for the second group). Both groups contain rather different types of objects, both natural and human-made. For these experiments, we used a CNN model based on the VGG-16 architecture (Simonyan & Zisserman, 2014), depicted in Figure 4. We discarded the bottom fully connected and convolutional layers as, perhaps, they are too big for this dataset. Instead, we appended three convolutional layers to the output of the pool3 layer with 256, 128, and 8 filters, respectively. The first two layers use 3x3 convolutions, and the last layer uses 1x1 convolutions. After that, we pass the output through a fully-connected layer of size 15 (FC1), which produces the representations used in clustering, and a fully connected layer of size 1 (FC2) with the sigmoid activation function to produce a binary classification decision. 4.3 TEXT CLASSIFICATION EXPERIMENTS Finally, to demonstrate the wide generalizability of the proposed loss components, we performed text classification experiments using an RNN model again, but on an entirely different type of data. Namely, we used the DBPedia ontology dataset (Zhang et al., 2015), which contains titles and abstracts of Wikipedia articles labeled with 14 ontology classes. Again, we split the samples into two groups and trained the model on the binary classification task. The classes “Company”, “EducationalInstitution”, “Artist”, “Athlete”, “OfficeHolder”, “MeanOfTransportation”, and “Building” belong to the first group, and the classes “NaturalPlace”, “Village”, “Animal”, “Plant”, “Album”, “Film”, and “WrittenWork” belong to the second group. As in subsection 4.1, we used the model depicted in Figure 1. 4.4 IMPLEMENTATION DETAILS Despite the fact that the proposed loss components can be directly implemented using two nested for loops, such an implementation will not be computationally efficient, as it will lead to a big computational graph operating on separate vectors without taking full advantage of highly optimized parallel matrix computations on the GPU. Therefore, it is desirable to have an efficient implementation that can take full advantage of modern GPUs. We have developed such an efficient implementation that significantly accelerates the computation of the loss component, in return for a higher memory consumption, by creating two matrices that contain all combinations of d_i and d_j from the summations in Equation 5 and performing the operations to calculate the loss on them. We have made our implementation for TensorFlow (Abadi et al., 2016) publicly available on GitHub³ alongside the aforementioned models from subsection 4.1 and subsection 4.2. It is worth noting that since the loss component Lsingle operates directly on the weights of the target layer, its computational complexity does not depend on the size of the batch. Instead, it depends on the size of that layer.

³http://github.com/placeholder/
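The vectorized strategy described in Section 4.4 (materializing all combinations of d_i and d_j and evaluating the hinged KL term on them at once) might look roughly like the sketch below, written in TensorFlow 2 style rather than the authors' original TensorFlow implementation.

```python
import tensorflow as tf

def single_layer_loss_vectorized(W, m=5.0, eps=1e-8):
    # Eq. 6: one probability distribution per row of the weight matrix.
    d = tf.nn.softmax(W, axis=-1)                      # (k, n)
    # Broadcast two copies of d so that kl[i, j] = D_KL(d_i || d_j) for all pairs.
    d_i = d[:, None, :]                                # (k, 1, n)
    d_j = d[None, :, :]                                # (1, k, n)
    kl = tf.reduce_sum(d_i * (tf.math.log(d_i + eps) - tf.math.log(d_j + eps)), axis=-1)
    hinge = tf.maximum(0.0, m - kl)                    # f_l(d_i, d_j) for all pairs
    # Eq. 5 sums f_l(d_i, d_j) + f_l(d_j, d_i) over i < j, which equals the sum
    # of the hinge matrix over all off-diagonal entries.
    mask = 1.0 - tf.eye(W.shape[0])
    return tf.reduce_sum(hinge * mask)
```

The trade-off is the one the text describes: memory grows with the square of the number of rows, but all pairwise terms are computed by a handful of parallel matrix operations.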
In contrast, Lmulti operates on the activations of the target layer for all samples in the batch, and its computational complexity depends on the number of samples in the batch. In practice, using the implementation described above, we were able to train models with a batch size of 512 and higher without exhausting the GPU’s memory. 5 RESULTS AND DISCUSSION 5.1 QUANTITATIVE ANALYSIS We report the average of the Adjusted Mutual Information (AMImax) and Normalized Mutual Information (NMIsqrt) scores (Vinh et al., 2010) across three runs in Table 1. On the simplest MNIST strokes sequences dataset Lsingle outperforms all other methods, whereas on the more challenging and complex datasets Lmulti works the best, probably due to its ability to influence the learned representations on all layers of the network behind the target layer. The proposed loss components also improve the quality of clustering in the autoencoder setting, although the gain is marginal. It is also important to note that in all our experiments the accuracy of the models was not affected in a harmful way when we applied the proposed loss components, or the effect was negligible (less than 0.5%). 5.2 QUALITATIVE ANALYSIS To examine the influence of the proposed loss components on the activations of the network, we plot the number of samples belonging to different underlying classes (on the x axis) for which the neurons (on the y axis) were active the most in the binary classification task in Figure 2 and Figure 3 for the MNIST strokes sequences and CIFAR-10 datasets, respectively. As we can see from these figures, during regular training with the binary classification objective, without the proposed loss components the models tend to learn representations that are specific to the target binary label, even though the samples within one group come from different classes. The model learns to use mostly just two neurons to discriminate between the target groups and hardly uses the rest of the neurons in the layer. We observe this behaviour across different types of models and datasets: an RNN model applied to a time series dataset and a CNN model applied to an image classification dataset behave in exactly the same way. Both proposed loss components Lsingle and Lmulti force the model to produce diverged representations, and we can see how this changes the patterns of activations in the target layer. It is easy to observe in Figure 2b that the patterns of activations learned by the networks roughly correspond to the underlying classes, despite the fact that the network did not have access to them during training. This pattern is not as easy to see in the case of the CIFAR-10 dataset (see Figure 3b), but we can observe that the proposed loss component nevertheless forced the network to activate different neurons for different classes, leading to a better AMI score on the clustering task. In order to further investigate the representations learned by the model, we visualized the representations of samples from the MNIST strokes sequences dataset in Figure 5 using TensorBoard. Figure 5a and Figure 5b in the top row depict the representations learned by the baseline model, colored according to the binary label and the underlying classes, respectively. Figure 5c and Figure 5d in the bottom row depict the representations of the same samples, learned by the model with the Lmulti loss component, colored in the same way.
It is easy to see that Lmulti indeed forced the model to learn disentangled representations of the input data. Note how the baseline model learned dense clusters of objects, with samples from the same group (but different classes) compactly packed in the same area. In contrast, the model with the proposed loss component learned considerably better representations which disentangle samples belonging to different classes and place them more uniformly in the space. In the real world, the number of clusters is rarely known beforehand. To systematically examine the stability of the proposed loss component, we plotted the Adjusted Mutual Information scores for the baseline methods and the Lmulti loss component with respect to the number of clusters in Figure 6, using the CIFAR-10 dataset. As can be seen from Figure 6, our loss component consistently outperforms the previously proposed methods regardless of the number of clusters. 6 CONCLUSION In this paper, we propose two novel loss components that substantially improve the quality of KMeans clustering, which uses representations of the input data learned by a given model. We performed a comprehensive set of experiments using two popular neural network architectures (RNNs and CNNs) and different modalities of data (time series, images, and text). Our results demonstrate that the proposed loss components consistently increase the Mutual Information scores by a significant margin and outperform previously proposed methods. In addition, we qualitatively analyzed the representations learned by the network by visualizing the activation patterns and relative positions of the samples in the learned space, showing that the proposed loss components indeed force the network to learn diverged representations.
1. What are the proposed regularizers' strengths and weaknesses?
2. How do the proposed regularizers encourage dissimilar weights and activations across samples?
3. Why convert weight vectors and ReLU activations into probability distributions using softmax?
4. Can simpler alternatives like orthogonality regularization achieve similar results?
5. How does the paper compare to previous work on clustering and representation learning?
6. Is there a natural interpretation of weight vectors as probability distributions?
7. What motivated the choice of measuring distance using KL divergence?
8. How does the model compare to other baseline methods in terms of performance and advantages?
9. What is the significance of producing separable representations, and how does the paper contribute to this topic?
10. Are there any limitations or areas for improvement in the proposed approach?
Review
Review Summary
This paper proposes two regularizers that are intended to make the representations learned in the penultimate layer of a classifier more conforming to inherent structure in the data, rather than just the class structure enforced by the classifier. One regularizer encourages the weights feeding into the penultimate layer to be dissimilar and the other encourages the activations across samples (even if they belong to the same class) to be dissimilar.

Pros
- The proposed regularizers are able to separate out the classes inherent in the data, even if this information is not provided through class labels. This is validated on several datasets using visualizations as well as quantitative metrics based on mutual information.

Cons
- It is not explained why it makes sense to first convert the weight vectors into probability distributions by applying the softmax function, and then measuring distances using KL divergence between the probability distributions. It should be explained more clearly if there is a natural interpretation of the weight vectors as probability distributions. Otherwise it is not obvious why the distance between the weight vectors is measured the way it is.
- Similarly, the ReLU activations are also first converted into probability distributions by applying a softmax. It should be explained why the model does this, as opposed to simply using dot products to measure similarity.
- The model is not compared to simpler alternatives such as adding an orthogonality regularization on the weights, i.e., computing W^TW and making the diagonals close to 1 and all other terms 0. Similar regularizers can be applied for activation vectors as well.
- The objective of this paper seems to be to produce representations that are easy to separate into clusters. This topic has a wealth of previous work. Of particular relevance are methods such as t-SNE [1], parametric t-SNE [2], and DEC [3]. The losses introduced in this paper are fairly straightforward. Therefore it would be good to compare to these baselines to show that a simple loss function is sufficient to achieve the objective.
- Disentangling usually refers to disentangling factors of variation, for example, lighting, pose, and object identity, which affect the appearance of a data point. This is different from separability, which is the property of a representation that makes the presence of clusters evident. This paper seems to be about learning separable representations, whereas the title suggests that it is about disentangled ones.

Quality
The design choices made in the paper (such as the choice of distance function) are not well explained. Also, given that the modifications introduced are quite simple, the paper could be improved by doing more thorough comparisons to other baselines.

Clarity
The paper is easy to follow.

Originality
The novel aspect of the paper is the way distance is measured by converting the weights (and activations) to probability distributions and using KL divergence to measure distance. However, it is not explained what motivated this choice.

Significance
The objective of this model is to produce representations that are separable, which is of general interest. However, given the wealth of previous work done in clustering, this paper would only be impactful if it compares to other hard baselines and shows clear advantages.

[1] van der Maaten, Laurens and Hinton, Geoffrey. Visualizing data using t-SNE. JMLR, 2008.
[2] van der Maaten, Laurens. Learning a parametric embedding by preserving local structure.
In International Conference on Artificial Intelligence and Statistics, 2009. [3] Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. ICML 2016.
ICLR
1. What is the main contribution of the paper regarding neural network representations for clustering tasks?
2. What are the issues with the introduction section of the paper?
3. What are the problems with the explanation of the method in the paper?
4. What are the evaluation metrics used in the experiments, and how are they calculated?
5. Can you provide more details on the specific neural network architecture discussed in the introduction?
Review
The paper proposes techniques for encouraging neural network representations to be more useful for clustering tasks. The paper contains some interesting experimental results, but unfortunately lacks a concise motivation and description of the method, as well as quality of writing.

Introduction: The introduction is supposed to present the problem and the 'chain of events' that led to this present work, but does not do that. The first paragraph contains an overly lengthy explanation that in a classification task, representations are only concerned with being helpful for this task, and not any other task. The paragraph starting with 'Consider the case...' describes in detail some specific neural network architecture, and what will happen in this architecture during training. The main problem with this paragraph is that it does not belong in the introduction. Indeed, other parts of the introduction have no relation to this paragraph, and the first part of the text that relates to this paragraph appears suddenly in Section 3. Given that this paragraph is two thirds of the introduction text, this is very peculiar. Furthermore, the introduction does not present the problem well: 1) What is a better representation for a clustering task? 2) Why is that important?

Method: There are a few problematic statements in this part: "The first loss component L_single works on a single layer and does not affect the other layers in the network". This is not exactly true, because it affects the layer it is related to, which affects upper layers through their feedforward input or lower layers through the backward pass. "Recall from the example in the introduction that we want to force the model to produce divergent representations for the samples that belong to the same class, but are in fact substantively different from each other". It is not clear why this is a corollary of the example in the introduction (that should be moved to the method part). "this loss component may help to learn a better representation only if the input to the target layer still contains the information about latent characteristics of the input data". What does this mean? The representation always contains such information that is relevant to the task at hand... And others. The main problem is that the work is poorly explained: starting from the task at hand, through the intuition behind the idea of how to solve it.

The experiments part contains results that show that the proposed method is superior by a substantial margin over the baseline approaches. However, the evaluation metrics and procedure are poorly explained: what are Adjusted Mutual Information (AMI) and Normalized Mutual Information (NMI)? How are they calculated? Or at least, the mutual information between what and what are they measuring?
ICLR
Title Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization Abstract As Artificial Intelligence as a Service gains popularity, protecting well-trained models as intellectual property is becoming increasingly important. There are two common types of protection methods: ownership verification and usage authorization. In this paper, we propose Non-Transferable Learning (NTL), a novel approach that captures the exclusive data representation in the learned model and restricts the model generalization ability to certain domains. This approach provides effective solutions to both model verification and authorization. Specifically: 1) For ownership verification, watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods. By comparison, our NTL-based ownership verification provides robust resistance to state-of-the-art watermark removal methods, as shown in extensive experiments with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets. 2) For usage authorization, prior solutions focus on authorizing specific users to access the model, but authorized users can still apply the model to any data without restriction. Our NTL-based authorization approach instead provides data-centric protection, which we call applicability authorization, by significantly degrading the performance of the model on unauthorized data. Its effectiveness is also shown through experiments on the aforementioned datasets.

*These authors contributed equally to this work.

1 INTRODUCTION Deep Learning (DL) is the backbone of Artificial Intelligence as a Service (AIaaS) (Ribeiro et al., 2015), which is being provided in a wide range of applications including music composition (Briot et al., 2020), autonomous driving (Li et al., 2021a), smart building (Xu et al., 2020a), etc. However, a good model can be expensive to obtain: it often requires dedicated architecture design (He et al., 2016), a large amount of high-quality data (Deng et al., 2009), lengthy training on professional devices (Zoph & Le, 2016), and expert tuning (Zhang et al., 2019). Thus, well-trained DL models are valuable intellectual property (IP) to the model owners and need protection. Generally speaking, there are two aspects in protecting an IP in AIaaS: verifying who owns the model and authorizing how the model can be used. These two aspects led to the development of two types of protection techniques: ownership verification and usage authorization. For ownership verification, prior works proposed approaches such as embedding watermarks into network parameters (Song et al., 2017), learning special behaviors for pre-defined triggers (Fan et al., 2019), and extracting fingerprints from the model (Le Merrer et al., 2020). However, they are vulnerable to state-of-the-art watermark removal approaches that are based on model fine-tuning or retraining (Chen et al., 2019), watermark overwriting and model pruning (Rouhani et al., 2018). For model usage authorization, most prior works were built on encrypting neural network parameters with a secret key (Alam et al., 2020; Chakraborty et al., 2020) and ensuring that models can only be used by users with this key. However, authorized users may use the model on any data without restriction. We believe that for comprehensive IP protection, the goal of usage authorization is not
only who is allowed to use the model, but also what data the model can be used on. We call this applicability authorization. Note that applicability authorization goes far beyond IP protection. It can also be viewed as a way to “control” how machine learning models are used in general. One example would be a company (e.g., Meta) training a recommendation system from adult data and using applicability authorization to prevent this system from being used by teenagers.

Figure 1: A visualization of the generalization bound trained with different approaches. The left figure shows Supervised Learning in the source domain, which can derive a wide generalization area. When Target-Specified NTL is applied (middle), the target domain is removed from the generalization area. As for Source-Only NTL (right), the generalization area is significantly reduced.

Our Approach and Contribution. In this work, we propose Non-Transferable Learning (NTL), a novel approach that can robustly verify the model ownership and authorize the model applicability on certain data. Intuitively, NTL goes against the current research trend of improving the generalization ability of models across various domains, e.g., domain generalization and adaptation (Zhou et al., 2020; Dong et al., 2020). Instead, NTL tries to make the generalization bound of DL models more explicit and narrower, by optimizing the model to learn domain-dependent features and thereby making the model exclusive to certain domains. More specifically, we consider two domains: the source domain where we want the models to perform well, and the auxiliary domain where we aim to degrade the model performance. And if the model trained with NTL is applied to a target domain similar to the auxiliary one, the performance should also be poor. As shown in Figure 1, we have developed two types of NTL approaches: Target-Specified NTL and Source-Only NTL.

• Target-Specified NTL assumes that the source and target domains are both known. We then treat the target domain as the auxiliary domain and enlarge the distance of representations between the source and auxiliary domains. Target-Specified NTL can be used to verify the model ownership by triggering misclassification. While previous model watermarks can often be easily removed because the model memorization of such watermarks encounters catastrophic forgetting (Kemker et al., 2018) during watermark removal, our NTL-based verification is resistant to state-of-the-art watermark removal approaches, because the misclassification behavior is dependent on the overall target-private features that have little correlation with the source-private features for the main task.

• In Source-Only NTL, the target domain is unknown and thus our approach relies solely on the source domain, aiming to degrade the performance in all other domains. In this case, NTL generates the auxiliary domain from a novel generative adversarial augmentation framework and then increases the representation distance.
Source-Only NTL can provide authorization to certain data rather than to particular users or devices, by degrading the model performance on all data domains other than the source domain. This provides data-centric applicability authorization, with which we can also prevent unauthorized model usage that is caused by secret-key leakage and cannot be addressed by prior model authorization methods. In addition to proposing the novel concept of NTL and developing its two approaches, we also experimentally validate their effectiveness. We conducted extensive experiments on 5 digit datasets, CIFAR10 & STL10, and VisDA. For target-specified cases, we demonstrate how to apply NTL for model ownership verification. Our experiments show that state-of-the-art model watermark removal methods are ineffective against NTL-based ownership verification. For source-only NTL, our experiments demonstrate its effectiveness in authorizing model applicability to certain data.
2 RELATED WORK Domain Generalization & Adaptation (DG & DA). DG aims to generalize learning models with available source domains to unseen target domains (Blanchard et al., 2011). A number of methods have been proposed for domain discrepancy minimization (Li et al., 2020), adversarial training (Rahman et al., 2020; Zhao et al., 2020c), invariant representation learning (Zhou et al., 2020; Piratla et al., 2020), etc. Recently, there has been significant interest in conducting DG with one source domain only, for which well-crafted data augmentation approaches (Qiao et al., 2020; Zhao et al., 2020b; Li et al., 2021b; Xu et al., 2020b) have been proposed to expand the input space. DA is also related to improving the generalization ability of models across domains (Ahmed et al., 2021); while DA can access the target data, DG has no access to any target sample (Xu et al., 2021; Dong et al., 2021). Unlike DG or DA, we try to weaken the generalization ability of models by expanding the distance between representations of different domains. Our method works effectively for both the target-specified and the source-only cases with a novel adversarial augmentation framework. Intellectual Property (IP) Protection for Deep Learning (DL). While DL has shown its unparalleled advantages in various applications, there are significant challenges in protecting DL models. For instance, Inference Attack (Shokri et al., 2017; Wang et al., 2019) can steal private information about the target DL model. Model Inversion Attack (He et al., 2019; Salem et al., 2020) is able to recover the input data via an analysis of the model prediction. These two types of attacks directly threaten the privacy of model users, while there are also many active attacks (Suciu et al., 2018; Yao et al., 2019) that lead DL models to produce abnormal behaviors. In addition, verifying model ownership and authorizing model usage have become important issues with the development of AIaaS. There have been a number of watermarking approaches addressing the verification of model ownership. For instance, Zhang et al. (2018) and Li et al. (2019) train a neural network on the original dataset together with a watermarked one assigned a particular label, which makes the model behave abnormally when it encounters watermarked data. Song et al. (2017) and Uchida et al. (2017) inject a pattern that is similar to regular photograph watermarks (Cheng et al., 2021) into the least significant bits of the model parameters and provide the corresponding decoding methods. Le Merrer et al.
(2020) and Zhao et al. (2020a) make use of adversarial examples to extract fingerprints from learned neural networks without accessing the network weights. Compared to these approaches, our NTL achieves model ownership verification by triggering universal misclassification. Moreover, with extensive experiments, we also demonstrate that state-of-the-art model watermark removal methods, e.g., FTAL and RTAL (Adi et al., 2018), EWC and AU (Chen et al., 2019), watermark overwriting, and model pruning (Rouhani et al., 2018), are not effective against NTL-based verification. Model usage authorization is another aspect of protecting model intellectual property. For instance, Alam et al. (2020) encrypt every network parameter with a secret key. Chakraborty et al. (2020) generate a secret key from hardware fingerprints of a particular device, and require that only users who possess this device can load and employ the model. Different from these methods, our NTL focuses on providing data-centric protection via applicability authorization, which retains good model performance on authorized data while degrading model performance on other data domains. To the best of our knowledge, this is the first work that prevents model usage on unauthorized data via model learning.
3 METHODOLOGY In this section, we introduce our NTL approach. Section 3.1 presents the inspiration and the design of the optimization objective of NTL, which is the core for both target-specified and source-only cases. Section 3.2 presents the generative augmentation framework for source-only cases. Our method is based on the concept of generative adversarial networks (GAN); however, our goal is not to propose a new GAN but to design an effective augmentation method in the context of NTL. Section 3.3 introduces the application of NTL to ownership verification and applicability authorization.
3.1 NON-TRANSFERABLE LEARNING WITH DISTANCE EXPANSION OF REPRESENTATION We consider a source domain with labeled samples S = {(x, y) | x ∼ P_X^S, y ∼ P_Y^S}, where P_X and P_Y denote the input and label distributions, respectively. In this work, we use image classification as the learning task with K possible classes, in which case x and y are matrix-valued and scalar random variables, respectively. In addition, we consider an auxiliary domain A = {(x, y) | x ∼ P_X^A, y ∼ P_Y^A}. The source domain S and the auxiliary domain A will be fed into a deep neural network, and without loss of generality, we split the neural network into two parts: a feature extractor Φ at the bottom and a classifier Ω on top. Inspiration from Information Bottleneck. Our NTL, in particular the design of its optimization objective, is inspired by the analysis of the Information Bottleneck (IB) (Tishby et al., 2000). Let us start by introducing Shannon Mutual Information (SMI). In addition to the input x and the label y, we also regard the representation z extracted by Φ as a random variable. The SMI between two random variables, e.g., between z and x, is defined as I(z; x) = E_{x∼P_X}[D_KL(P(z|x) ∥ P(z))], where D_KL(·∥·) denotes the Kullback-Leibler (KL) divergence and P(·) is the corresponding distribution.
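To make the Φ/Ω split concrete, the following is a minimal PyTorch-style sketch of how such a model could be organized. The module and variable names are hypothetical, and the classifier head merely mirrors the three-linear-layer design reported in Appendix B.1; this is a sketch under those assumptions, not the authors' released implementation.

import torch.nn as nn

class NTLNet(nn.Module):
    # Classification model split into a bottom feature extractor (Phi) and a
    # top classifier (Omega), as assumed throughout Section 3.
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.phi = backbone                                  # feature extractor Phi
        self.omega = nn.Sequential(                          # classifier Omega
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        z = self.phi(x).flatten(1)                           # representation z
        return self.omega(z), z                              # logits and representation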
In IB theory, considering effectiveness, privacy, and generalization, an optimal representation has three properties (Achille & Soatto, 2018): (1) Sufficiency: the representation z is sufficient for the label y, i.e., I(z; y) = I(x; y); (2) Minimality: z needs to represent as little information about the input x as possible, i.e., min I(z; x); (3) Invariance: z is optimal, meaning that it does not overfit to spurious correlations between y and a nuisance n embedded in x, i.e., I(z; n) = 0. IB theory assumes that the nuisance n is a factor that affects the input x and, together with y, determines to some extent what x looks like. For instance, in domain generalization, the nuisance n can be regarded as a domain index that indicates which domain a certain sample comes from (Du et al., 2020). In our problem, different from the objective of IB theory, NTL forces the model to extract nuisance-dependent representations, which is the opposite of the invariance property. In other words, we aim to increase I(z; n), and we have the following proposition for achieving this aim.
Proposition 1. Let n be a nuisance for input x. Let z be a representation of x, and let y be the label. For the information flow in representation learning, we have
I(z; x) − I(z; y | n) ≥ I(z; n).   (1)
The detailed proof of Proposition 1 is included in the Appendix.
Optimization Objective Design. Proposition 1 provides guidance for maximizing I(z; n). First, unlike in IB theory, we do not minimize I(z; x) for the minimality property. In addition, we try to minimize I(z; y | n) through the design of an optimization objective that measures the error between the model prediction and the ground truth during the training of the neural network. Specifically, instead of using the typical CrossEntropy loss to measure this error, we apply a KL divergence loss to direct the training, and we have the following theorem.
Theorem 1. Let ŷ be the predicted label output by a representation model when fed with input x, and suppose that ŷ is a scalar random variable and x is balanced on the ground-truth label y. Denote the one-hot forms of ŷ and y as ŷ and y, respectively. If the KL divergence loss D_KL(P(ŷ) ∥ P(y)) increases, the mutual information I(z; y) will decrease.
The detailed proof of Theorem 1 is provided in the Appendix. According to this theorem, I(z; y | n) can be minimized by increasing the KL divergence loss of training data conditioned on different n. However, as stated in Section 1, we aim to degrade the model performance in the auxiliary domain while maintaining good model performance in the source domain. Thus, we only minimize I(z; y | n) by increasing the KL divergence loss on the auxiliary domain data. To achieve this goal, we design a loss L*_ntl that takes the form of a difference between the KL divergence losses of the source and auxiliary domains (L_S, L_A), i.e., L_S = E_{x∼P_X^S}[D_KL(P(Ω(Φ(x))) ∥ P(y))] and L_A = E_{x∼P_X^A}[D_KL(P(Ω(Φ(x))) ∥ P(y))]. Specifically, this loss can be written as follows:
L*_ntl = L_S − min(β, α · L_A).   (2)
Here, α is the scaling factor for L_A (α = 0.1 in our experiments), and β is an upper bound that prevents L_A from growing too large and dominating the overall loss (β = 1.0 in experiments; please see the Appendix for more details about α and β). Moreover, if we use n = 0 and n = 1 to denote the source and auxiliary domain respectively, the optimization of Eq. (2) can guarantee the sufficiency property for the source domain, I(z; y | n=0) = I(x; y | n=0), while increasing L_A decreases I(z; y | n=1).
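As an illustration of Eq. (2), the following is a minimal PyTorch-style sketch of the source and auxiliary KL divergence losses and the resulting objective. Function and variable names are hypothetical, the model is assumed to return (logits, representation) as in the earlier sketch, and the eps-smoothing of the one-hot target is an implementation assumption for numerical stability; the released code may differ.

import torch
import torch.nn.functional as F

def kl_to_onehot(logits, labels, num_classes, eps=1e-8):
    # D_KL(P(y_hat) || P(y)) between the predicted class distribution and the
    # one-hot ground truth, as used for L_S and L_A (eps keeps the log finite).
    p_hat = F.softmax(logits, dim=1)
    p_y = F.one_hot(labels, num_classes).float().clamp(min=eps)
    p_y = p_y / p_y.sum(dim=1, keepdim=True)
    return (p_hat * (p_hat.clamp(min=eps).log() - p_y.log())).sum(dim=1).mean()

def ntl_loss_basic(model, x_src, y_src, x_aux, y_aux, num_classes,
                   alpha=0.1, beta=1.0):
    # Eq. (2): L*_ntl = L_S - min(beta, alpha * L_A).
    logits_src, _ = model(x_src)
    logits_aux, _ = model(x_aux)
    l_s = kl_to_onehot(logits_src, y_src, num_classes)
    l_a = kl_to_onehot(logits_aux, y_aux, num_classes)
    return l_s - torch.clamp(alpha * l_a, max=beta)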
According to Proposition 1, we can move the upper bound of I(z; n) to a higher baseline by optimizing Eq. (2). However, such optimization might only make the classifier Ω more sensitive to domain features and have little effect on the feature extractor Φ. In that case, the representations of different domains captured by Φ may still be similar, which conflicts with our intention to maximize I(z; n), and the target performance could then be easily recovered by fine-tuning or adapting Ω with a small number of labeled target samples. On the other hand, directly calculating I(z; n) and taking it as part of the optimization objective is difficult, especially in the optimization of representation learning (Torkkola, 2003). Achille & Soatto (2018) apply a binary classifier as the nuisance discriminator and can estimate I(z; n) after the model training via this discriminator. Here, we find another way to increase I(z; n) indirectly, based on the following theorem.
Theorem 2. Let n be a nuisance that is regarded as a domain index. n = 0 and n = 1 denote that a certain input x comes from two different domains. Suppose that these two domains have the same number of samples d, and that the samples of each domain are symmetrically distributed around their centroid. Let z be a representation of x drawn from distribution P_Z. An estimator with a characteristic kernel from a Reproducing Kernel Hilbert Space (RKHS) – the Gaussian kernel estimator MMD(P, Q; exp) – is applied on finite samples from distributions P_{Z|0} and P_{Z|1} to approximate the Maximum Mean Discrepancy (MMD) between these two distributions. If MMD(P_{Z|0}, P_{Z|1}; exp) increases to saturation, the mutual information between z and n will increase.
MMD(P_{Z|0}, P_{Z|1}; exp) = E_{z,z′∼P_{Z|0}}[e^{−∥z−z′∥²}] − 2 E_{z∼P_{Z|0}, z′∼P_{Z|1}}[e^{−∥z−z′∥²}] + E_{z,z′∼P_{Z|1}}[e^{−∥z−z′∥²}]   (3)
We also employ a nuisance discriminator to observe the change of I(z; n) during training. The details of this discriminator design and the proof of Theorem 2 can be found in the Appendix.
NTL Optimization Objective. Based on the above analysis, we design our NTL optimization objective to increase I(z; n) and extract nuisance-dependent representations. Specifically, we compute MMD(P, Q; exp) between the representations of the source and auxiliary domain data and maximize it. For stability, we also set an upper bound on MMD(P, Q; exp). The overall optimization objective of NTL with distance expansion of representation is then shaped as follows:
L_ntl = L_S − min(β, α · L_A · L_dis), where L_dis = min(β′, α′ · MMD(P_{x∼P_X^S}(Φ(x)), P_{x∼P_X^A}(Φ(x)); exp)).   (4)
Here, α′ and β′ represent the scaling factor and upper bound of L_dis, respectively (α′ = 0.1 and β′ = 1.0 in our experiments; please refer to the Appendix for more details about α′ and β′), and Φ(·) is the feature extractor that outputs the representations of the given inputs. A code-level sketch of the MMD estimator and this objective is given below. When the target domain is known and accessible, it is regarded as the auxiliary domain, and the above NTL with distance expansion of representation can be conducted directly on the source and auxiliary domains. We call such cases Target-Specified NTL.
3.2 SOURCE DOMAIN AUGMENTATION FOR SOURCE-ONLY NTL In practice, the target domain might be unknown or unavailable. For such cases, we develop a novel generative augmentation framework to generate an auxiliary domain and then leverage the above NTL process, in what we call Source-Only NTL.
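Both the objective in Eq. (4) and the augmentation objective introduced in this section rely on the Gaussian-kernel MMD estimator of Eq. (3). Below is a minimal PyTorch-style sketch of a single-kernel (unit-bandwidth, biased) estimator and of the combined NTL loss; it reuses the hypothetical kl_to_onehot helper sketched above, and a multi-kernel bandwidth variant is discussed in Appendix C.5. This is a sketch under these assumptions, not the authors' released implementation.

import torch

def gaussian_mmd(z0, z1):
    # Biased Gaussian-kernel MMD estimator of Eq. (3) between two batches of
    # representations (source z0, auxiliary z1), with kernel exp(-||z - z'||^2).
    def k(a, b):
        return torch.exp(-torch.cdist(a, b, p=2).pow(2)).mean()
    return k(z0, z0) - 2.0 * k(z0, z1) + k(z1, z1)

def ntl_loss(model, x_src, y_src, x_aux, y_aux, num_classes,
             alpha=0.1, beta=1.0, alpha_p=0.1, beta_p=1.0):
    # Eq. (4): L_ntl = L_S - min(beta, alpha * L_A * L_dis), with
    # L_dis = min(beta', alpha' * MMD(Phi(x_src), Phi(x_aux); exp)).
    logits_src, z_src = model(x_src)
    logits_aux, z_aux = model(x_aux)
    l_s = kl_to_onehot(logits_src, y_src, num_classes)
    l_a = kl_to_onehot(logits_aux, y_aux, num_classes)
    l_dis = torch.clamp(alpha_p * gaussian_mmd(z_src, z_aux), max=beta_p)
    return l_s - torch.clamp(alpha * l_a * l_dis, max=beta)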
In the following, we introduce our augmentation framework, which can generate data samples drawn from the neighborhood distribution of the source domain with different distances and directions, to serve as the auxiliary data domain in NTL.
GAN Design for Source Domain Augmentation. The overall architecture of our augmentation framework is shaped like a generative adversarial network (GAN), made up of a generator G and a discriminator D. G takes Gaussian noise and a one-hot label as input and outputs a data sample. Given a sample, D tells whether it is real or fake and predicts its label. The adversarial game proceeds as G tries to generate data that is realistic enough to fool D, while D tries to distinguish real data from generated data to the best of its ability. After a sufficient period of such training, the distributions of the generated data and the ground-truth data become too similar to tell apart (Li et al., 2017). Based on this principle, we utilize G to approximate the source domain. However, if we follow standard GAN training, the trained GAN will not generate samples with deterministic labels. Therefore, we combine the intuitions of CGAN (Mirza & Osindero, 2014) and infoGAN (Chen et al., 2016) to propose a new training approach for our augmentation framework. In our approach, G is trained with an MSE loss on D's real/fake output for its generated data. D consists of three modules: a feature extractor and, behind it, two classifiers as two branches, where a binary classifier predicts whether the data is real or not, and a multi-class classifier outputs the label; both classifiers rely on the representations extracted by the feature extractor. For training D, we use an MSE loss to evaluate its ability to distinguish real samples from fake ones, and a KL divergence loss to quantify the performance of predicting labels for the real data. Finally, there is an additional training step that enforces the GAN to generate samples of given labels by optimizing G and D simultaneously. The training losses L_D, L_G and L_{G,D} are:
L_D = E_{x∼P_X^S, y∼P_Y^S}[ ½ (∥D_b(x) − 1∥² + ∥D_b(G(noise, y)) − 0∥²) + D_KL(P(D_m(x)) ∥ P(y)) ]
L_G = E_{y∼P_Y^S}[ ∥D_b(G(noise, y)) − 1∥² ],  L_{G,D} = E_{y′∼P_Y^U}[ D_KL(P(D_m(G(noise, y′))) ∥ P(y′)) ]   (5)
Algorithm 1: Generative Adversarial Data Augmentation for Source-Only NTL
Require: Source domain data S = {(x, y) | x ∼ P_X^S, y ∼ P_Y^S}; generator G, discriminator D; list of augmentation distances DIS, maximum number of augmentation directions DIR; GAN training epochs e_GAN, augmentation training epochs e_AUG; initialize the auxiliary domain data A = [ ];
Output: The auxiliary domain data A = {(x, y) | x ∼ P_X^A, y ∼ P_Y^A};
1: for i = 1 to e_GAN do
2:   use (noise, y ∼ P_Y^S) to optimize G with L_G; use S and G(noise, y ∼ P_Y^S) to optimize D with L_D;
3:   use (noise, y ∼ P_Y^U) to optimize G and D with L_{G,D};
4: for dis in DIS do
5:   for dir = 1 to DIR do
6:     for l in G do
7:       interval = d(l) / DIR;   // the function d(·) returns the dimension of layer l
8:       freeze D and l[0 : dir × interval];   // freeze D and the first dir parts of the l-th layer of G
9:     for i = 1 to e_AUG do
10:      use (noise, y ∼ P_Y^S) and S to optimize G with L_aug;
11:    A ← A ∪ G(noise, y ∼ P_Y^U);   // use G to generate augmentation data
Here, we use subscripts b and m to denote outputs from the binary classifier and the multi-class classifier of D, respectively. The noise in Eq. (5) is drawn from the Gaussian distribution P_g = N(0, 1), while y′ is drawn from the uniform distribution P_Y^U with K equally likely possibilities.
Here, y and y′ denote the one-hot forms of y and y′, respectively.
Augmentation with Different Distances. To generate data at different distances from the source domain, we apply a Gaussian kernel estimator to measure the MMD between the distributions of the source data and the data generated by G. However, if the MMD distance is optimized to increase without restriction, the outcome will lose the semantic information, i.e., the essential features for the main task. In order to preserve such semantic information, we use a CrossEntropy loss to add a restriction to the optimization objective. With this restriction, we set multiple upper bounds – DIS – for generating data at different distances (we use DIS to denote a list of dis values of various magnitudes). The specific objective is as follows:
L_aug = − min{ dis, MMD(P_{x∼P_X^S}(D_z(x)), P_{y∼P_Y^S}(D_z(G(noise, y))); exp) } + E_{y∼P_Y^S}[ D_CE(D_m(G(noise, y)), y) ]   (6)
Here, the subscript z denotes outputs from the feature extractor of D. For every dis, we freeze D and use Eq. (6) to optimize G. After the optimization, we can generate augmentation data by feeding G with Gaussian noise and labels drawn from P_Y^U.
Augmentation with Different Directions. We also investigate how to generate data in different directions. The optimization of the Gaussian MMD follows the direction of the gradient, which is the fastest way to approach the objective. In that case, all augmented domains of different distances might follow the same direction, i.e., the direction of the gradient. Therefore, in order to augment neighborhood domains in different directions, we need to introduce more restrictions into the optimization process. Specifically, for the intermediate representations of G, we view each filter (neuron) as corresponding to a feature dimension of the representation. At the beginning of directional augmentation, we make multiple copies of the GAN trained in the last step (G and D), and we pick one GAN for each direction. If we want to augment the source in DIR directions, we divide the overall network of G into DIR equal parts. For the augmentation of the first direction, the first part of G is frozen and not updated during optimization. The second direction is augmented by freezing the first two parts of the network and conducting the optimization; the third corresponds to the first three parts, and so on. Given a certain dis, we can optimize Eq. (6) to augment the source domain in DIR directions by freezing G gradually. The detailed flow is shown in Algorithm 1.
3.3 APPLICATION OF NTL FOR MODEL INTELLECTUAL PROPERTY PROTECTION
Ownership Verification. The proposed NTL can easily verify the ownership of a learning model by triggering misclassification. To achieve this, we can attach a certain trigger patch, which is shaped like a shallow mask for images, to the source domain data and use the patched data as the auxiliary domain, and then conduct NTL on these two domains. After that, the trained model will perform poorly on data with the patch but have good performance on data without the patch. Such a difference in model behavior can be utilized to verify ownership, which is similar to what regular backdoor-based model watermarking approaches do. In contrast, models trained with other methods often present nearly the same performance on data with or without the patch, as the patch is shallow and light.
Applicability Authorization.
When applying NTL to authorize the model applicability, we aim to restrict the model generalization ability to only the authorized domain, where all the data is attached with a dedicated patch, and we need to make sure that the patch design does not impact the semantic information (note that the unforgeability and uniqueness of the patch are not the main consideration of this work, and we will explore them in future work). For simplicity, we use a shallow mask similar to that of the aforementioned ownership verification as the authorized patch. We first use our generative adversarial augmentation framework to generate neighborhood data of the original source domain. Then, we regard the source data with the patch attached as the source domain in NTL, and use the union of the original source data and the generated neighborhood data (with and without the patch) as the auxiliary domain. After the NTL training on these two domains, the learning model performs well only on the source domain data with the authorized patch attached, and exhibits low performance in other domains. In this way, we achieve model applicability authorization.
4 EXPERIMENTAL RESULTS Our code is implemented in PyTorch (and provided in the Supplementary Materials). All experiments are conducted on a server running Ubuntu 18.04 LTS, equipped with an NVIDIA TITAN RTX GPU. The datasets and experiment settings are introduced below. Digits. MNIST (Deng, 2012) (MT) is the most popular digit dataset. USPS (Hull, 1994) (US) consists of digits that are scanned from envelopes by the U.S. Postal Service. SVHN (Netzer et al., 2011) (SN) contains house-number data selected from Google Street View images. MNIST-M (Ganin et al., 2016) (MM) is made by combining MNIST with different backgrounds. Finally, SYN-D (Roy et al., 2018) (SD) is a synthetic dataset generated by combining noisy and complex backgrounds. CIFAR10 & STL10: Both CIFAR10 and STL10 (Coates et al., 2011) are ten-class classification datasets. In order to make these two sets applicable to our problem, we follow the procedure in French et al. (2017). VisDA: This dataset (Peng et al., 2017) contains a training set (VisDA-T) and a validation set (VisDA-V) of 12 object categories. For classifying these datasets, we apply VGG-11 (Simonyan & Zisserman, 2014) for digit recognition, VGG-13 (Simonyan & Zisserman, 2014) for CIFAR10 & STL10, and ResNet-50 (He et al., 2016) and VGG-19 for VisDA. All networks are initialized with ImageNet pre-trained weights (Deng et al., 2009). We use 3 seeds (2021, 2022, 2023) to conduct all experiments three times and report the average performance. Network architectures, parameters, and error bars can be found in the Appendix.
4.1 TARGET-SPECIFIED NTL Effectiveness of NTL in Reducing Target Domain Performance. For the digit sets and CIFAR10 & STL10, we pick all possible domain pairs to carry out experiments. As for VisDA, we regard the training set as the source domain and the validation set as the target. We include the results of standard supervised learning with KL divergence on the source domain and report the performance difference when using NTL. Table 1 and Figure 3 show the results for the digit sets, CIFAR10 & STL10, and VisDA. For NTL, we observe that the target performance of all pairs is degraded to nearly 10% with little accuracy reduction in the source domain. The largest performance degradation of the target domain, from 97.0% to 11.7%, occurs when the source is MM and the target is MT.
Comparing NTL with supervised learning, the average relative performance degradation on the target domain over all cases is approximately 80%. These results demonstrate that Target-Specified NTL can effectively degrade the performance on the target without sacrificing the source performance. NTL for Ownership Verification. We use a simple pixel-level mask as the trigger patch, which is shown in Figure 2 (please refer to the Appendix for more details). We use 6 state-of-the-art model watermark removal approaches to test the robustness of NTL-based verification: FTAL (Adi et al., 2018), RTAL (Adi et al., 2018), EWC (Chen et al., 2019), AU (Chen et al., 2019), watermark overwriting, and model pruning (Rouhani et al., 2018). The settings of these methods are included in the Appendix. The results are shown in Table 2. We can see that models trained with NTL behave differently on the data with and without the patch, whereas supervised learning performs nearly the same. Furthermore, all 6 watermark removal methods fail to improve the performance on the patched data, which indicates that NTL-based ownership verification is effective and robust to state-of-the-art watermark removal methods.
4.2 SOURCE-ONLY NTL Effectiveness of NTL in Reducing Non-source Domain Performance. For all three dataset cases, we select one domain as the source and then conduct our generative adversarial augmentation to generate the auxiliary domain. We set a series of discrete dis values from 0.1 to 0.5 with a step of 0.1, and for each dis we generate augmentation data in 4 directions (DIR = 4). Table 3 and Figure 3 present the results of Source-Only NTL and its comparison with supervised learning. Figure 4 shows the augmentation data for MNIST (other datasets are included in the Appendix). From the results, we can clearly see that models trained with NTL perform worse on all non-source domains compared with supervised learning, and MM-MT has the largest degradation, from 97.0% to 14.7%. NTL for Applicability Authorization. Following the implementation steps outlined in Section 3.3, we carry out experiments on all 3 dataset cases. The experimental results for digits are presented in Table 4 (the results of CIFAR10 & STL10 and VisDA are in the Appendix). From the table, we can see that the model performs very well in the authorized domain while having poor performance in all other domains (with or without the authorized patch). The highest classification accuracy on unauthorized domains is barely 42.7%, which will discourage users from employing this model on unauthorized data. This shows the effectiveness of NTL in applicability authorization.
5 CONCLUSION AND FUTURE WORK In this paper, we propose Non-Transferable Learning (NTL), a novel training approach that can restrict the generalization ability of deep learning models to a specific data domain while degrading the performance in other domains. With the help of a generative adversarial augmentation framework, NTL is effective both in the presence and absence of target domains. Extensive experiments on 5 digit recognition datasets, CIFAR10 & STL10, and VisDA demonstrate that the ownership of models trained with NTL can be easily verified, and the verification is resistant to state-of-the-art watermark removal approaches. Moreover, with NTL training, model owners can authorize the model applicability to a certain data domain without worrying about unauthorized usage in other domains.
In future work, it would be interesting to extend NTL to other tasks beyond image classification, e.g., semantic segmentation, object detection/tracking, and natural language processing (NLP) tasks. For tasks where the input data is not images, generating augmentation data would require different methods and could be challenging. Another possible future direction is Multi-Task Learning, where we could explore whether it is possible to restrict the model generalization ability to certain tasks; for instance, we think it might be useful in some cases to restrict a language model to certain tasks. Moreover, yet another interesting direction could be to combine cryptography with NTL-based verification and authorization.
ACKNOWLEDGEMENT We gratefully acknowledge the support by National Science Foundation grants 1834701, 1724341, 2038853, 2016240, Office of Naval Research grant N00014-19-1-2496, and research awards from Facebook, Google, PlatON Network, and General Motors.
ETHICS STATEMENT The studies in this paper do not involve human subjects, dataset releases, or discrimination/bias/fairness concerns, and they do not raise legal compliance or research integrity issues. Non-Transferable Learning is proposed to address the shortcomings of current learning models in intellectual property protection. However, if the model trainer themselves is malicious, they may utilize NTL for harmful purposes. For example, a malicious trainer could use NTL to implant backdoor triggers in their model and release the model to the public. In addition, there are recent domain adaptation (DA) works on adapting the domain-shared knowledge within a source model to the target one without access to the source data (Liang et al., 2020; Ahmed et al., 2021; Kundu et al., 2020; Wang et al., 2021). However, if the source model is trained with NTL, we believe that these DA approaches will be ineffective. In other words, our NTL can be regarded as a type of attack on such source-free DA works.
REPRODUCIBILITY STATEMENT The implementation code can be found in https://github.com/conditionWang/NTL. All datasets and the code platform (PyTorch) we use are public. In addition, we also provide detailed experiment parameters and random seeds in the Appendix.
SUMMARY OF THE APPENDIX This appendix contains additional details for the ICLR 2022 article “Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization”, including mathematical proofs, experimental details and additional results. The appendix is organized as follows:
• Section A introduces the theoretical proofs of Proposition 1, Theorem 1 and Theorem 2.
• Section B provides additional implementation settings, including the network architectures (Section B.1) and hyperparameters (Section B.2).
• Section C provides additional experimental results, including the augmentation data of other datasets (Section C.1), the model authorization results on CIFAR10 & STL10 and VisDA (Section C.2), the experiments of VisDA on VGG-19 (Section C.3), and the error bars of the main experimental results (Section C.4). Section C.5 provides the experimental results for different kernel widths.
• In Section D, we discuss possible attacks that can be constructed based on our proposed method.
Note that NTL used in this appendix is the abbreviation of Non-Transferable Learning.
A THEORY PROOFS
A.1 PROOF
Proposition 1. Let n be a nuisance for input x. Let z be a representation of x, and let y be the label.
For the information flow in representation learning, we have
I(z; x) − I(z; y | n) ≥ I(z; n).   (7)
Proof: According to Proposition 3.1 in (Achille & Soatto, 2018), there is a Markov chain (y, n) → x → z. This chain describes the information flow starting from the ground-truth knowledge (label y and nuisance n) of input x to the extracted representation z: the information flows from (y, n) to x and then to z. The Data Processing Inequality (DPI) for a Markov chain ensures that I(z; x) ≥ I(z; y, n). With the chain rule, we have I(z; y, n) = I(z; n) + I(z; y | n). Thus, we obtain I(z; x) ≥ I(z; n) + I(z; y | n). ■
Theorem 1. Let ŷ be the predicted label output by a representation model when fed with input x, and suppose that ŷ is a scalar random variable and x is balanced on the ground-truth label y. Denote the one-hot forms of ŷ and y as ŷ and y, respectively. If the KL divergence loss D_KL(P(ŷ) ∥ P(y)) increases, the mutual information I(z; y) will decrease.
Proof. Suppose that the information flow in the classifier Ω of a representation model follows a Markov chain z → ŷ → y. Applying the Data Processing Inequality, we have
I(z; y) ≤ I(ŷ; y) = E_{y∼P_Y}[D_KL(P(ŷ | y) ∥ P(ŷ))].   (8)
Because the input data of this representation model is balanced across classes, we suppose that both y and ŷ are drawn from the uniform distribution P_Y^U with K equally likely possibilities. Moreover, although the distribution of ŷ might change a little during training, we can assume it will not become very biased, given the balance of the input data. Both ŷ and y are vectors with K dimensions. In the PyTorch implementation, the computation of the KL divergence loss treats each vector as observations of a scalar random variable, with every dimension of the vector being one observation of this variable. In this case, the loss between ŷ and y takes the form D_KL(P(ŷ) ∥ P(y)) = Σ_{i=1}^{K} ŷ_i · log(ŷ_i / y_i). It is easy to see that D_KL(P(ŷ) ∥ P(y)) is non-negative, and that the KL divergence loss attains its minimum value D_KL(P(ŷ) ∥ P(y)) = 0 if and only if ŷ = y. Meanwhile, ŷ and y are scalar random variables that equal the dimension index with the maximum value of ŷ and y, respectively. Therefore, the probability of ŷ = y decreases as D_KL(P(ŷ) ∥ P(y)) increases, i.e., D_KL(P(ŷ) ∥ P(y)) ↑ ⇒ P(ŷ, y) ↓. Expanding Eq. (8) further,
I(ŷ; y) = E_{y∼P_Y}[D_KL(P(ŷ | y) ∥ P(ŷ))] = Σ_y P(y) · Σ_ŷ (P(ŷ, y) / P(y)) · log( P(ŷ, y) / (P(ŷ) · P(y)) ).   (9)
Here, both P(ŷ) and P(y) are uniform distributions P_Y^U, and we assumed at the beginning of this proof that P(ŷ) will not become very biased. As a result, we regard P(ŷ) and P(y) as nearly unchanged after training. In addition, P(ŷ, y) decreases as D_KL(P(ŷ) ∥ P(y)) increases. Furthermore, we can easily calculate that ∂I(ŷ; y)/∂P(ŷ, y) < 0. In this case, I(ŷ; y) decreases as D_KL(P(ŷ) ∥ P(y)) increases, and so does I(z; y), since I(ŷ; y) is its upper bound. ■
Theorem 2. Let n be a nuisance that is regarded as a domain index. n = 0 and n = 1 denote that a certain input x comes from two different domains. Suppose that these two domains have the same number of samples d, and that the samples of each domain are symmetrically distributed around their centroid. Let z be a representation of x drawn from distribution P_Z.
An estimator with a characteristic kernel from a Reproducing Kernel Hilbert Space (RKHS) – the Gaussian kernel estimator MMD(P, Q; exp) – is applied on finite samples from distributions P_{Z|0} and P_{Z|1} to approximate the Maximum Mean Discrepancy (MMD) between these two distributions. If MMD(P_{Z|0}, P_{Z|1}; exp) increases to saturation, the mutual information between z and n will increase.
MMD(P_{Z|0}, P_{Z|1}; exp) = E_{z,z′∼P_{Z|0}}[e^{−∥z−z′∥²}] − 2 E_{z∼P_{Z|0}, z′∼P_{Z|1}}[e^{−∥z−z′∥²}] + E_{z,z′∼P_{Z|1}}[e^{−∥z−z′∥²}]   (10)
Proof. According to the definition of Shannon Mutual Information, we have
I(z; n) = E_{n∼P(n)}[D_KL(P(z|n) ∥ P(z))] = E_{n∼P(n)} E_{z∼P(z|n)}[ log( P(z|n) / P(z) ) ].   (11)
Because the two domains have the same number of samples, n follows P(n) with P(n=0) = P(n=1) = 0.5, and Eq. (11) can be rewritten as
I(z; n) = 0.5 E_{z∼P(z|n=0)}[ log( P(z|n=0) / P(z) ) ] + 0.5 E_{z∼P(z|n=1)}[ log( P(z|n=1) / P(z) ) ].   (12)
Next, we denote the probability density functions (PDFs) of P_{Z|0} and P_{Z|1} as p(z) and q(z), respectively. According to the law of total probability, the PDF of the distribution P_Z is 0.5 p(z) + 0.5 q(z), and we have
I(z; n) = 0.5 ∫_{−∞}^{+∞} p(z) · log( 2p(z) / (p(z) + q(z)) ) dz + 0.5 ∫_{−∞}^{+∞} q(z) · log( 2q(z) / (p(z) + q(z)) ) dz.   (13)
Subsequently, we denote the expectations and variances of P_{Z|0} and P_{Z|1} as (µ0, σ0) and (µ1, σ1), respectively. With the assumption that the samples of each domain are symmetrically distributed about their centroid, we have p_m = max{p(z)} = p(µ0) and q_m = max{q(z)} = q(µ1). Thus, if we use two variables f and g to denote the PDFs of P_{Z|0} and P_{Z|1}, i.e., f = p(z) and g = q(z), then f ∈ (0, p_m] and g ∈ (0, q_m]. Based on the above analysis and notation, we can split Eq. (13) into 4 terms as follows:
I(z; n) = 0.5 ∫_0^{p_m} f · log( 2f / (f + g) ) df + 0.5 ∫_0^{p_m} f · log( 2f / (f + g′) ) df + 0.5 ∫_0^{q_m} g · log( 2g / (f + g) ) dg + 0.5 ∫_0^{q_m} g · log( 2g / (f′ + g) ) dg,   (14)
where the superscript ′ indicates the right-hand side of f and g. Next, let us consider the Gaussian kernel estimator MMD(P_{Z|0}, P_{Z|1}; exp). The estimator consists of 3 terms, and e^{−∥z−z′∥²} clearly decreases as ∥z − z′∥² increases. Note that "MMD(P_{Z|0}, P_{Z|1}; exp) increases to saturation" means that at least one term increases while the other two terms remain unchanged or increase. For the next step of the proof, we need Theorem 2 of (Sriperumbudur et al., 2009).
Theorem 2 (Sriperumbudur et al., 2009). Suppose {(X_i, Y_i)}_{i=1}^N, X_i ∈ M, Y_i ∈ {−1, +1} ∀i, is a training sample drawn i.i.d. from µ. Assuming the training sample is separable, let f_svm be the solution to the program inf{∥f∥_H : Y_i f(X_i) ≥ 1, ∀i}, where H is an RKHS with a measurable and bounded kernel k. If k is characteristic, then
1 / ∥f_svm∥_H ≤ γ_k(P, Q) / 2,   (15)
where P := (1/d) Σ_{Y_i=+1} δ_{X_i}, Q := (1/d) Σ_{Y_i=−1} δ_{X_i}, d is the sample quantity, and δ represents the Dirac measure.
This theorem provides a bound on the margin of a hard-margin SVM in terms of MMD. Eq. (15) shows that a smaller MMD between P and Q enforces a smaller margin (i.e., a less smooth classifier f_svm, where smoothness is measured as ∥f_svm∥_H). Moreover, the Gaussian kernel is a measurable and bounded kernel function in an RKHS. According to this theorem and the nature of the hard-margin SVM, we can conclude that the variances σ0, σ1 of P_{Z|0} and P_{Z|1} decrease and the difference between the expectations µ0 and µ1 increases as the MMD increases to saturation; this conclusion can also be found in (Jegelka et al., 2009). In the following, we prove that I(z; n) increases when the difference between µ0 and µ1 increases.
Due to the symmetry of the PDFs, both f and g increase to the left of their own expectation and decrease to the right. Without loss of generality, we assume that µ0 lies to the left of µ1. For the first term of Eq. (14), which corresponds to the left interval of f, the value of g is smaller than in the case before increasing the difference between µ0 and µ1, so this term increases when the difference between µ0 and µ1 increases. As for the right interval of f, the maximum value of f + g′ (in the neighborhood of µ1) occurs later than in the case before increasing the difference, and the maximum value is also smaller. Therefore, the second term of Eq. (14) also increases with the difference between µ0 and µ1. Similarly, the integrals over g follow the same trend as those over f. In this case, I(z; n) increases when the difference between µ0 and µ1 increases. Next, we prove that I(z; n) increases if the variance of either P_{Z|0} or P_{Z|1} decreases. Without loss of generality, we assume the variance of P_{Z|0} decreases while the variance of P_{Z|1} remains unchanged. For the PDF of a distribution, if the variance decreases, the maximum value of the PDF increases, and there are two points of intersection with the same value between the PDFs; these conclusions follow easily since the integral of a PDF is always 1. We denote the new maximum value of f as p′_m, and the value at the two points of intersection as p_=. In addition, during the saturated increase of the MMD, we can always find a pair of µ0 and µ1 that enables the left side of g to intersect with the right side of f. Using the notations in Figure 5, we can rewrite Eq. (14) as
Terms of f = ∫_0^{p_=} f · log( 2f / (f + g) ) df + ∫_{p_=}^{p′_m} f · log( 2f / (f + g) ) df + ∫_{p_m}^{p′_m} f · log( 2f / (f + g′) ) df + ∫_{p_=}^{p_m} f · log( 2f / (f + g′) ) df + ∫_{p_{µ1}}^{p_=} f · log( 2f / (f + g′) ) df + ∫_0^{p_{µ1}} f · log( 2f / (f + g′) ) df   (16)
Terms of g = ∫_0^{q_{1=}} g · log( 2g / (f + g) ) dg + ∫_{q_{1=}}^{q_{2=}} g · log( 2g / (f + g) ) dg + ∫_{q_{2=}}^{q_m} g · log( 2g / (f + g) ) dg + ∫_0^{q_m} g · log( 2g / (f′ + g) ) dg   (17)
With the decrease of σ0, we can conclude that the 1st, 4th, and 6th terms of Eq. (16) and the 2nd term of Eq. (17) decrease, while the remaining terms of Eq. (16) and Eq. (17) increase. For the next step, we define a new function R(f, g) = f · log( 2f / (f + g) ), whose first-order derivative is ∂R/∂f = log( 2f / (f + g) ) + g / (f + g). We can easily see that ∂R/∂f > 0 when f > g. Based on this analysis, the decrease of the 1st term of Eq. (16) can be offset by the added increase of the 5th term of Eq. (16) and the 3rd term of Eq. (17); the decrease of the 4th term of Eq. (16) can be offset by the 2nd term of Eq. (16); the decrease of the 6th term of Eq. (16) can be offset by the 4th term of Eq. (17); and the decrease of the 2nd term of Eq. (17) can be offset by the 3rd term of Eq. (16). Moreover, these offsets more than compensate for the decreases. In this case, we can conclude that I(z; n) increases if the variance of either P_{Z|0} or P_{Z|1} decreases. Combining the above two cases, if the difference between the expectations increases and the variances of the two distributions decrease, the mutual information will increase. ■
A.2 OBSERVING THE MUTUAL INFORMATION We follow a process similar to (Achille & Soatto, 2018) to observe the change of the mutual information I(z; n). To be specific, let Θ be a binary classifier that, given a representation z, tries to predict whether z is sampled from the distribution of one domain, P_{Z|0}, or the other, P_{Z|1}.
According to (Sønderby et al., 2016), if we train Θ with the loss E_{z∼P_{Z|0}}[log Θ(z)] + E_{z∼P_{Z|1}}[log(1 − Θ(z))], there is always a Bayes-optimal Θ*:
Θ*(z) = P(z | n = 0) / ( P(z | n = 0) + P(z | n = 1) ).   (18)
With Eq. (12), if we assume that Θ0 and Θ1, trained with E_{z∼P_{Z|0}}[log Θ0(z)] + E_{z∼P_{Z|1}}[log(1 − Θ0(z))] and E_{z∼P_{Z|1}}[log Θ1(z)] + E_{z∼P_{Z|0}}[log(1 − Θ1(z))] respectively, are close to the optimal ones Θ*_0 and Θ*_1, we have
I(z; n) = 0.5 E_{z∼P(z|n=0)}[ log( P(z|n=0) / P(z) ) ] + 0.5 E_{z∼P(z|n=1)}[ log( P(z|n=1) / P(z) ) ] = 0.5 E_{z∼P(z|n=0)}[ log( 2Θ0(z) ) ] + 0.5 E_{z∼P(z|n=1)}[ log( 2Θ1(z) ) ].   (19)
With this approximation, we train Θ0 and Θ1 on the model at every NTL training round, and we obtain the curve of I(z; n) shown in Figure 6 (MNIST). According to the figure, I(z; n) increases during the overall training process, which is consistent with our intention.
B IMPLEMENTATION SETTINGS
B.1 NETWORK ARCHITECTURE To build the classification models, we use several popular architectures as the bottom feature extractor and attach fully-connected layers to them as the top classifier, as shown in Table 5. Specifically, the backbone network for digits is VGG-11, that for CIFAR10 & STL10 is VGG-13, and we use both ResNet-50 and VGG-19 for VisDA. The classifiers of all models are the same, i.e., 3 linear layers with ReLU and dropout. As for the GAN in the augmentation framework, the generator G is made up of 4 ConvTranspose blocks and 2 Residual blocks, and the discriminator D consists of a feature extractor with 4 convolution layers, a binary classifier, and a multi-class classifier. These two classifiers are composed of sequential fully-connected layers and share the same representations extracted by the front extractor. The detailed architecture is shown in Tables 6 and 7.
B.2 HYPERPARAMETERS Scaling factors and upper bounds. As introduced in Section 3.1 of the main paper, there are two scaling factors (α, α′) that control the trade-off between the maximization of I(z; n) and the sufficiency property of the source domain. Here, we conduct experiments using different values (α = 0.01, 0.05, 0.10, 0.20, 0.50 and α′ = 0.01, 0.05, 0.10, 0.20, 0.50) and evaluate their impact on the performance of NTL. For Target-Specified NTL, we select the combinations MNIST→USPS, STL10→CIFAR10 and VisDA-T→VisDA-V. For Source-Only NTL, we choose MNIST→Non-S, STL10→Non-S and VisDA-T→Non-S as representatives to carry out experiments. The results are presented in Tables 8 and 9. It is easy to conclude that NTL works effectively with different scaling factors. As for the upper bounds (β, β′), we set them to prevent the auxiliary domain loss and the MMD distance from dominating the optimization objective and affecting the convergence of training. Training parameters. For the optimization of NTL, we use Adam as the optimizer, with learning rate γ = 0.0001 and a batch size of 32. For all datasets, we randomly select 8,000 samples from their training sets as the source data, and 1,000 samples from their test sets as the test data (if a dataset does not have a test set, we select its test data from the training set without overlapping with the chosen 8,000 source samples). The source and auxiliary domains always contain the same number of samples. In the training of the adversarial augmentation, the optimizer is also Adam, with learning rate γ = 0.0002 and Adam betas of 0.5 and 0.999. The batch size is 64, and the dimension of the latent space fed to the generator is 256.
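The following is a minimal sketch of the optimizer configuration described above and of a single NTL update step. The module names (model, generator, discriminator) and the ntl_loss helper follow the earlier hypothetical sketches; this is only an illustration of the stated hyperparameters, not the released training script.

import torch

def build_optimizers(model, generator, discriminator):
    # Settings from Section B.2: Adam with lr 1e-4 for NTL training (batch size 32),
    # and Adam with lr 2e-4, betas (0.5, 0.999) for the augmentation GAN (batch size 64).
    ntl_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    return ntl_opt, g_opt, d_opt

def ntl_step(model, ntl_opt, batch_src, batch_aux, num_classes):
    # One NTL update with the objective of Eq. (4) (ntl_loss sketched in Section 3.1).
    (x_s, y_s), (x_a, y_a) = batch_src, batch_aux
    loss = ntl_loss(model, x_s, y_s, x_a, y_a, num_classes)
    ntl_opt.zero_grad()
    loss.backward()
    ntl_opt.step()
    return loss.item()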
B.3 TRIGGERING AND AUTHORIZATION PATCH As mentioned in Sections 4.1 and 4.2 of the main paper, we attach a patch to the data in order to utilize NTL for ownership verification and usage authorization. We create the patch in a simple way. Specifically, for the pixel in the i-th row and j-th column of an RGB image, if either i or j is even, a value v is added to the R channel of this pixel (the channel value cannot exceed 255); a code-level sketch is given at the end of this section. Intuitively, the patch depends on the pixel values of each image, so the feature-space change brought by attaching the patch differs across images. In our experiments, if the content of the image is simple, e.g., MNIST, USPS and SVHN, a small v can shift the feature space sufficiently, but for more complicated images we have to increase v so that source images with and without the patch become distinguishable. Specifically, we pick the value as follows: MNIST, USPS, SVHN (v = 20); MNIST-M, SYN-D, CIFAR10, STL10 (v = 80); VisDA (v = 100). As mentioned in the main paper, we will explore the unforgeability and uniqueness of patch generation in future work.
B.4 IMPLEMENTATION OF WATERMARK REMOVAL APPROACHES In Section 4.1 of the main paper, we implement 6 model watermark removal approaches to verify the effectiveness of NTL-based ownership verification. Here, we introduce how these approaches are implemented. FTAL (Adi et al., 2018) is an approach that fine-tunes the entire watermarked model using the original training data. To implement it, we use 30% of the training set that has been learned by NTL to fine-tune the entire model. When using RTAL (Adi et al., 2018), the top classifier is randomly initialized before fine-tuning. In our experiments, we load the feature extractor of the model trained with NTL, randomly initialize a classifier attached to the extractor, and then use 30% of the training set to fine-tune this combined model. As for EWC (Chen et al., 2019), we use the code of (Chen et al., 2019) to compute the Fisher information of the network parameters and adjust the learning rate of fine-tuning. The data used by EWC is also 30% of the training set. Finally, AU (Chen et al., 2019) utilizes the watermarked model to pseudo-label additional unlabeled samples from other similar domains, and these samples are used to fine-tune the model together with the original training set. Following this principle, we use 30% of the training set and the same quantity of unlabeled samples from other domains (the ratio between these two parts is 1:1) to fine-tune the model trained with our NTL. We conduct all fine-tuning methods for 200 epochs. For watermark overwriting, we overwrite a new backdoor-based watermark (Zhang et al., 2018) on the model trained with NTL. Specifically, we attach a white corner (3 × 3) as the backdoor trigger to 1/15 of the training set, and follow the training approach of (Zhang et al., 2018) to write the watermark into the model. In addition, similar to other watermarking works (Rouhani et al., 2018), we also test whether NTL-based verification is resistant to model pruning, and apply a layer-wise pruning method (Han et al., 2015) to prune 70% of the parameters of the model trained with NTL.
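As referenced in Section B.3, the following is a minimal NumPy sketch of the pixel-level patch. The function name and the 0-indexed row/column convention are assumptions; the exact patch used in the released code may differ.

import numpy as np

def attach_patch(image: np.ndarray, v: int) -> np.ndarray:
    # Section B.3: for the pixel in row i and column j, if either i or j is even,
    # add v to the R channel, clipping the channel value at 255.
    # `image` is an H x W x 3 uint8 RGB array; the paper reports v = 20 for
    # MNIST/USPS/SVHN, v = 80 for MNIST-M/SYN-D/CIFAR10/STL10, and v = 100 for VisDA.
    patched = image.astype(np.int32).copy()
    rows = np.arange(patched.shape[0])[:, None]
    cols = np.arange(patched.shape[1])[None, :]
    mask = (rows % 2 == 0) | (cols % 2 == 0)
    red = patched[..., 0]
    red[mask] = np.minimum(red[mask] + v, 255)
    return patched.astype(np.uint8)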
C ADDITIONAL EXPERIMENTAL RESULTS
C.1 AUGMENTATION DATA OF OTHER DATASETS In the main paper, we present the augmentation data of MNIST; in this section, we include the augmentation data of the other datasets: Figure 7 for USPS, Figure 8 for SVHN, Figure 9 for MNIST-M, Figure 10 for SYN-D, Figure 11 for CIFAR10, and Figures 12 and 13 for VisDA-T.
C.2 MODEL USAGE AUTHORIZATION ON CIFAR10 & STL10 AND VISDA Here we present the experiments on authorizing model usage on CIFAR10 & STL10 and VisDA, shown in Table 10. According to the results, the model performs well on the data attached with the authorized patch and performs poorly on all other samples.
C.3 ADDITIONAL RESULTS OF VISDA ON VGG-19 To demonstrate the effectiveness of NTL on different network architectures, we also carry out experiments of VisDA on VGG-19. All other settings are the same as before, and the results are shown in Table 11. The performance is consistent with the aforementioned experiments, which shows the wide applicability of NTL.
C.4 ERROR BAR We conduct all experiments with three random seeds (2021, 2022, 2023), and present the error ranges in this section. Table 12 gives the error range of Target-Specified NTL, corresponding to Table 1 of the main paper; Table 13 presents the errors of the Source-Only NTL experiments, corresponding to Table 3 of the main paper; Table 14 shows the errors of model authorization, presented as Table 4 in the main paper.
C.5 THE IMPACT OF GAUSSIAN KERNEL BANDWIDTH In our implementation, we utilize a series of Gaussian kernels to approximate the MMD, implemented as the MK-MMD of Long et al. (2015). Specifically, the bandwidth of the kernels is controlled by two parameters, mul and num (we use mul = 2.0, num = 5 in the experiments presented in the main text of the paper). The bandwidths of these kernels are as follows:
B = { (∥x1 − x2∥² / (n² − n)) · mul^{i − ⌊num/2⌋} }_{i=0}^{num−1},   (20)
where x1 and x2 are two input data batches of size n, and ⌊·⌋ extracts the integer part of its input. A code-level sketch of this bandwidth computation is given at the end of the appendix. To investigate the impact of the kernel bandwidth, we select a series of mul and num values and conduct Source-Only NTL experiments on MNIST, CIFAR10 and VisDA-T; the results are shown in Table 15. According to the results, the performance difference between the source and target is nearly the same for different values of mul and num. As these two parameters directly determine the kernel bandwidth, these results demonstrate that the kernel bandwidth does not have a significant impact on NTL performance.
D POSSIBLE ATTACKS BASED ON NTL Although we propose Non-Transferable Learning for protecting intellectual property in AIaaS, a malicious model owner could also utilize NTL to stealthily poison their model or implant backdoor triggers into it and release the model to the public. In the setting of applying Target-Specified NTL to verify model ownership, the patch we used can also be regarded as a trigger for a targeted misclassification backdoor. From the results of ownership verification in the main paper, we can see the possibility of launching NTL-based targeted backdoor attacks. As for the case of Source-Only NTL, our objective is shaped like a universal poisoning attack that restricts the generalization ability of models. The results in our main paper demonstrate the feasibility of this poisoning attack.
In addition, there have recently been more domain adaptation (DA) works on adapting the domain-shared knowledge within the source model to the target model without access to the source data (Liang et al., 2020; Ahmed et al., 2021; Kundu et al., 2020). However, if the source model is trained with Source-Only NTL, we believe that these DA approaches will be ineffective. In other words, our NTL can be regarded as a type of attack on such source-free DA works.
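As referenced in Section C.5, the following is a minimal PyTorch-style sketch of the multi-kernel bandwidth scheme of Eq. (20) and the resulting multi-kernel Gaussian MMD. It assumes, as in common MK-MMD implementations, that ∥x1 − x2∥² in Eq. (20) denotes the sum of pairwise squared distances over the concatenated batch; the released code may differ.

import torch

def gaussian_bandwidths(x1, x2, mul=2.0, num=5):
    # Eq. (20): base bandwidth = sum of pairwise squared distances / (n^2 - n),
    # scaled by mul^(i - floor(num/2)) for i = 0, ..., num - 1.
    x = torch.cat([x1, x2], dim=0)
    n = x.size(0)
    d2 = torch.cdist(x, x, p=2).pow(2)
    base = d2.sum() / (n * n - n)
    return [base * (mul ** (i - num // 2)) for i in range(num)]

def mk_mmd(z0, z1, mul=2.0, num=5):
    # Multi-kernel Gaussian MMD between two batches of representations,
    # summing kernels exp(-||a - b||^2 / bandwidth) over the bandwidth list.
    bws = gaussian_bandwidths(z0, z1, mul, num)
    def k(a, b):
        d2 = torch.cdist(a, b, p=2).pow(2)
        return sum(torch.exp(-d2 / bw) for bw in bws).mean()
    return k(z0, z0) - 2.0 * k(z0, z1) + k(z1, z1)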
1. What is the focus of the paper regarding deep learning models?
2. What are the strengths of the proposed non-transferable learning method?
3. Are there any weaknesses in the presentation of the paper?
4. What related works are missing in the paper regarding adversarial attacks and IP protection?
5. How could the notations and symbols used in the paper be improved?
6. Do you have concerns about the repetition of experiments and the lack of standard deviation values?
7. How might increasing the kernel bandwidth impact the results?
Summary Of The Paper
In the era of deep learning, pre-trained models have been regarded as intellectual property of AI companies, so protecting these models has become more and more important. To achieve this aim, this paper proposes a non-transferable learning (NTL) method to capture the exclusive data representation in the learned model and restrict the model generalization ability to certain domains. This approach provides effective solutions to both model verification and authorization. Specifically: For ownership verification, watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods. By comparison, the NTL-based ownership verification provides robust resistance to state-of-the-art watermark removal methods, as shown in extensive experiments with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets. For usage authorization, prior solutions focus on authorizing specific users to access the model, but authorized users can still apply the model to any data without restriction. The NTL-based authorization approach instead provides data-centric protection, called applicability authorization, by significantly degrading the performance of the model on unauthorized data. In general, this paper contributes a novel method to the field, and the experiments verify the success of the proposed method.
Review
Pros:
• The research direction is promising and important in the real world. Nowadays, AI companies train their own deep models with abundant labelled data that costs a lot of resources, so it is good timing to research how to protect these models, which has become very important and practical.
• This paper proposes a method that provides an effective solution to both model verification and authorization, which is general and promising to be applied in other applications.
• This paper is easy to follow.
• The experiments are sufficient to support the claims made in this paper. A plus is that experiments are conducted with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets.
Cons:
• The presentation should be improved. The first paragraph in the introduction is too long; it is better to divide it into several paragraphs to better demonstrate the key points of this paper. I am also not sure it is necessary to list the contributions in the introduction: such contributions have been described clearly in the introduction and abstract, so it seems that you do not need to restate them.
• Key related works are missing. An AI company needs to be aware of many adversarial attacks, such as reprogramming attacks and model-inversion attacks. These works are also related to IP protection of deep learning. It would be better to include these attacks as related work, and some discussion should be added for general readers of ICLR.
• Some notations should be changed. For example, we will not use X or Y to represent distributions; instead, we will use them to represent random variables. It is better to use P_X to represent the distribution corresponding to a random variable X. It is also unnecessary to use GMMD; you can use MMD(P, Q; k), where k is a Gaussian kernel (you can follow the notations from recent deep kernel MMD papers).
• How many times do you repeat your experiments? I did not see error bar/STD values for your methods. These should be provided to verify that the experimental results are stable.
• If we consider adding a bandwidth to your kernel function, how does the kernel bandwidth affect your results?
Title Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization Abstract As Artificial Intelligence as a Service gains popularity, protecting well-trained models as intellectual property is becoming increasingly important. There are two common types of protection methods: ownership verification and usage authorization. In this paper, we propose Non-Transferable Learning (NTL), a novel approach that captures the exclusive data representation in the learned model and restricts the model generalization ability to certain domains. This approach provides effective solutions to both model verification and authorization. Specifically: 1) For ownership verification, watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods. By comparison, our NTL-based ownership verification provides robust resistance to stateof-the-art watermark removal methods, as shown in extensive experiments with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets. 2) For usage authorization, prior solutions focus on authorizing specific users to access the model, but authorized users can still apply the model to any data without restriction. Our NTL-based authorization approach instead provides data-centric protection, which we call applicability authorization, by significantly degrading the performance of the model on unauthorized data. Its effectiveness is also shown through experiments on aforementioned datasets. 1 INTRODUCTION Deep Learning (DL) is the backbone of Artificial Intelligence as a Service (AIaaS) (Ribeiro et al., 2015), which is being provided in a wide range of applications including music composition (Briot et al., 2020), autonomous driving (Li et al., 2021a), smart building (Xu et al., 2020a), etc. However, a good model can be expensive to obtain: it often requires dedicated architecture design (He et al., 2016), a large amount of high-quality data (Deng et al., 2009), lengthy training on professional devices (Zoph & Le, 2016), and expert tuning (Zhang et al., 2019). Thus, well-trained DL models are valuable intellectual property (IP) to the model owners and need protection. Generally speaking, there are two aspects in protecting an IP in AIaaS, verifying who owns the model and authorizing how the model can be used. These two aspects led to the development of two types of protection techniques: ownership verification and usage authorization. For ownership verification, prior works proposed approaches such as embedding watermarks into network parameters (Song et al., 2017), learning special behaviors for pre-defined triggers (Fan et al., 2019), and extracting fingerprints from the model (Le Merrer et al., 2020). However, they are vulnerable to state-of-art watermark removal approaches that are based on model fine-tuning or retraining (Chen et al., 2019), watermark overwriting and model pruning (Rouhani et al., 2018). For model usage authorization, most prior works were built on encrypting neural network parameters with a secret key (Alam et al., 2020; Chakraborty et al., 2020) and ensuring that models can only be used by users with this key. However, authorized users may use the model on any data without restriction. We believe that for comprehensive IP protection, the goal of usage authorization is not *These authors contributed equally to this work. 
Figure 1: A visualization of the generalization bound trained with different approaches. The left figure shows Supervised Learning in the source domain, which can derive a wide generalization area. When Target-Specified NTL is applied (middle), the target domain is removed from the generalization area. As for Source-Only NTL (right), the generalization area is significantly reduced.
only who is allowed to use the model, but also what data the model can be used on. We thus consider a new data-centric aspect of usage authorization in this work, i.e., authorizing models to certain data for preventing their usage on unauthorized data. We call this applicability authorization. Note that applicability authorization goes far beyond IP protection. It can also be viewed as a way to “control” how machine learning models are used in general. One example would be a company (e.g., Meta) training a recommendation system from adult data and using applicability authorization to prevent this system from being used by teenagers. Our Approach and Contribution. In this work, we propose Non-Transferable Learning (NTL), a novel approach that can robustly verify the model ownership and authorize the model applicability on certain data. Intuitively, NTL goes against the current research trend of improving the generalization ability of models across various domains, e.g., domain generalization and adaptation (Zhou et al., 2020; Dong et al., 2020). Instead, NTL tries to make the generalization bound of DL models more explicit and narrower, by optimizing the model to learn domain-dependent features and thereby making the model exclusive to certain domains. More specifically, we consider two domains: the source domain where we want the models to perform well, and the auxiliary domain where we aim to degrade the model performance. If the model trained with NTL is applied to a target domain similar to the auxiliary one, the performance should also be poor. As shown in Figure 1, we have developed two types of NTL approaches: Target-Specified NTL and Source-Only NTL. • Target-Specified NTL assumes that the source and target domains are both known. We then treat the target domain as the auxiliary domain and enlarge the distance of representations between the source and auxiliary domains. Target-Specified NTL can be used to verify the model ownership by triggering misclassification. While previous model watermarks can often be easily removed because the model memorization of such watermarks encounters catastrophic forgetting (Kemker et al., 2018) during watermark removal, our NTL-based verification is resistant to state-of-the-art watermark removal approaches, because the misclassification behavior is dependent on the overall target-private features that have little correlation with the source-private features for the main task. • In Source-Only NTL, the target domain is unknown and thus our approach relies solely on the source domain, aiming to degrade the performance in all other domains. In this case, NTL generates the auxiliary domain from a novel generative adversarial augmentation framework and then increases the representation distance.
Source-Only NTL can provide authorization to certain data rather than particular users or devices, by degrading the model performance on all other data domains other than the source domain. This provides data-centric applicability authorization, with which we can also prevent unauthorized model usage that are caused by the secret key leakage and cannot be addressed by prior model authorization methods. In addition to proposing the novel concept of NTL and developing its two approaches, we are also able to experimentally validate their effectiveness. We conducted extensive experiments on 5 digit sets, CIFAR10 & STL10 and VisDA. For target-specified cases, we demonstrate how to apply NTL for model ownership verification. Our experiments show that the state-of-art model watermark removal methods are ineffective on NTL-based ownership verification. For source-only NTL, our experiments demonstrate its effectiveness in authorizing model applicability to certain data. 2 RELATED WORK Domain Generalization & Adaptation (DG & DA). DG aims to generalize learning models with available source domains to unseen target domains (Blanchard et al., 2011). A number of methods have been proposed for domain discrepancy minimization (Li et al., 2020), adversarial training (Rahman et al., 2020; Zhao et al., 2020c), invariance representation learning (Zhou et al., 2020; Piratla et al., 2020), etc. Recently, there is significant interest on conducting DG with one source domain only, for which well-crafted data augmentation approaches (Qiao et al., 2020; Zhao et al., 2020b; Li et al., 2021b; Xu et al., 2020b) have been proposed to expand the input space. DA is also related to improving the generalization ability of models across domains (Ahmed et al., 2021), and while DA can access the target data, DG has no access to any target sample (Xu et al., 2021; Dong et al., 2021). Unlike DG or DA, we try to weaken the generalization ability of models by expanding the distance between representations of different domains. Our method works effectively for both the target-specified and the source-only cases with a novel adversarial augmentation framework. Intellectual Property (IP) Protection for Deep Learning (DL). While DL has shown its unparalleled advantages in various applications, there are significant challenges in protecting DL models. For instance, Inference Attack (Shokri et al., 2017; Wang et al., 2019) can steal private information about the target DL model. Model Inversion Attack (He et al., 2019; Salem et al., 2020) is able to recover the input data via an analysis of the model prediction. These two types of attacks directly threaten the privacy of model users, while there are also many active attacks (Suciu et al., 2018; Yao et al., 2019) that lead DL models to produce abnormal behaviors. In addition, verifying model ownership and authorizing model usage have become important issues with the development of AIaaS. There have been a number of watermarking approaches addressing the verification of model ownership. For instance, Zhang et al. (2018) and Li et al. (2019) train a neural network on the original datasets and the watermarked one assigned with a particular label, which makes the model behave abnormally when it encounters watermarked data. Song et al. (2017) and Uchida et al. (2017) inject a pattern that is similar to regular photograph watermarks (Cheng et al., 2021) into the least significant bits of the model parameters and provide the corresponding decoding methods. Le Merrer et al. 
(2020) and Zhao et al. (2020a) make use of adversarial examples to extract fingerprints from learned neural networks without accessing network weights. Compared to these approaches, our NTL can achieve model ownership verification by triggering universal misclassification. Moreover, with extensive experiments, we also demonstrate that state-of-the-art model watermark removal methods, e.g., FTAL and RTAL (Adi et al., 2018), EWC and AU (Chen et al., 2019), watermark overwriting and model pruning (Rouhani et al., 2018), are not effective against NTL-based verification. Model usage authorization is another aspect of protecting model intellectual property. For instance, Alam et al. (2020) encrypt every network parameter with a secret key. Chakraborty et al. (2020) generate a secret key from hardware fingerprints of a particular device, and require that only users who possess this device can load and employ the model. Different from these methods, our NTL focuses on providing data-centric protection via applicability authorization, which retains good model performance on authorized data while degrading model performance for other data domains. To the best of our knowledge, this is the first work that prevents model usage on unauthorized data via model learning. 3 METHODOLOGY In this section, we introduce our NTL approach. Section 3.1 presents the inspiration and the design of the optimization objective of NTL, which is the core for both target-specified and source-only cases. Section 3.2 presents the generative augmentation framework for source-only cases. Our method is based on the concept of generative adversarial networks (GANs); however, our goal is not to propose a new GAN but to design an effective augmentation method in the context of NTL. Section 3.3 introduces the application of NTL to ownership verification and applicability authorization. 3.1 NON-TRANSFERABLE LEARNING WITH DISTANCE EXPANSION OF REPRESENTATION We consider a source domain with labeled samples S = {(x, y) | x ∼ P_X^S, y ∼ P_Y^S}, where P_X and P_Y are the input and label distributions, respectively. In this work, we use image classification as the learning task with K possible classes, in which case x and y are matrix-valued and scalar random variables, respectively. In addition, we consider an auxiliary domain A = {(x, y) | x ∼ P_X^A, y ∼ P_Y^A}. The source domain S and the auxiliary domain A will be fed into a deep neural network, and without loss of generality, we split the neural network into two parts: a feature extractor Φ at the bottom and a classifier Ω on top. Inspiration from Information Bottleneck. Our NTL, in particular the design of its optimization objective, is inspired by the analysis of the Information Bottleneck (IB) (Tishby et al., 2000). Let us start by introducing Shannon Mutual Information (SMI). In addition to input x and label y, we also regard the representation z extracted by Φ as a random variable. The SMI between two random variables, e.g., between z and x, is defined as I(z;x) = E_{x∼P_X}[D_KL(P(z|x) ∥ P(z))], where D_KL(·) represents the Kullback-Leibler (KL) divergence and P(·) is the distribution.
In IB theory, considering effectiveness, privacy, and generalization, an optimal representation has three properties (Achille & Soatto, 2018): (1) Sufficiency: label y sufficiently differentiates representation z, i.e., I(z;y) = I(x;y); (2) Minimality: z needs to represent as little information about input x as possible, i.e., min I(z;x); (3) Invariance: z is optimal, meaning that it does not overfit to spurious correlations between y and a nuisance n embedded in x, i.e., I(z;n) = 0. IB theory assumes that the nuisance n is a factor that affects input x and that, together with y, determines to some extent what x looks like. For instance, in domain generalization, the nuisance n can be regarded as a domain index that indicates which domain a certain sample comes from (Du et al., 2020). In our problem, different from the objective of IB theory, NTL forces the model to extract nuisance-dependent representations, which is the opposite of the invariance property. In other words, we aim to increase I(z;n), and we have the following proposition for achieving this aim.
Proposition 1. Let n be a nuisance for input x. Let z be a representation of x, and let y be the label. For the information flow in representation learning, we have
I(z;x) − I(z;y|n) ≥ I(z;n)   (1)
The detailed proof for Proposition 1 is included in the Appendix.
Optimization Objective Design. Proposition 1 provides guidance for maximizing I(z;n). First, unlike in IB theory, we do not minimize I(z;x) for the minimality property. In addition, we try to minimize I(z;y|n) through the design of an optimization objective that measures the error between the model prediction and the ground truth during the training of neural networks. Specifically, instead of using the typical CrossEntropy loss to measure the error, we apply a KL divergence loss to direct the training, and we have the following theorem.
Theorem 1. Let ŷ be the label predicted by a representation model when fed with input x, and suppose that ŷ is a scalar random variable and x is balanced on the ground truth label y. Denote the one-hot forms of ŷ and y as vectors ŷ and y, respectively. If the KL divergence loss D_KL(P(ŷ)∥P(y)) increases, the mutual information I(z;y) will decrease.
The detailed proof of Theorem 1 is provided in the Appendix. According to this theorem, I(z;y|n) can be minimized by increasing the KL divergence loss of training data conditioned on different n. However, as stated in Section 1, we aim to degrade the model performance in the auxiliary domain while maintaining good model performance in the source domain. Thus, we only minimize I(z;y|n) by increasing the KL divergence loss of the auxiliary domain data. To achieve this goal, we design a loss L*_ntl that takes the form of a subtraction between the KL divergence losses of the source and auxiliary domains (L_S, L_A), i.e., L_S = E_{x∼P_X^S}[D_KL(P(Ω(Φ(x))) ∥ P(y))] and L_A = E_{x∼P_X^A}[D_KL(P(Ω(Φ(x))) ∥ P(y))]. Specifically, this loss can be written as follows:
L*_ntl = L_S − min(β, α · L_A)   (2)
Here, α is the scaling factor for L_A (α = 0.1 in our experiments), and β is an upper bound that prevents L_A from becoming too large and dominating the overall loss (β = 1.0 in our experiments; please see the Appendix for more details about α and β). Moreover, if we use n = 0 and n = 1 to denote the source and auxiliary domain respectively, the optimization of Eq. (2) can guarantee the sufficiency property for the source domain: I(z;y|n=0) = I(x;y|n=0), and increasing L_A decreases I(z;y|n=1).
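To make the objective concrete, below is a minimal PyTorch-style sketch of Eq. (2). It is a sketch under our own assumptions, not the authors' released implementation: Φ (feature extractor) and Ω (classifier) are assumed to be torch modules, labels are integer class indices, and the helper name kl_to_onehot is ours.

```python
import torch
import torch.nn.functional as F

def kl_to_onehot(logits, y, num_classes):
    # D_KL(P(y_hat) || P(y)) with P(y_hat) = softmax(logits) and P(y) a one-hot target;
    # a small epsilon keeps the log of the one-hot target finite.
    p_hat = F.softmax(logits, dim=1)
    p_true = F.one_hot(y, num_classes).float().clamp(min=1e-6)
    return (p_hat * (p_hat.clamp(min=1e-6).log() - p_true.log())).sum(dim=1).mean()

def ntl_star_loss(phi, omega, x_src, y_src, x_aux, y_aux, num_classes,
                  alpha=0.1, beta=1.0):
    # Eq. (2): L*_ntl = L_S - min(beta, alpha * L_A)
    loss_src = kl_to_onehot(omega(phi(x_src)), y_src, num_classes)
    loss_aux = kl_to_onehot(omega(phi(x_aux)), y_aux, num_classes)
    return loss_src - torch.clamp(alpha * loss_aux, max=beta)
```

Minimizing this loss keeps the source-domain KL term small while pushing the (clamped) auxiliary-domain KL term up, which is exactly the subtraction structure of Eq. (2).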
According to Proposition 1, we can move the upper bound of I(z;n) to a higher baseline by optimizing Eq. (2). However, such optimization might only make the classifier Ω more sensitive to domain features and have little effect on the feature extractor Φ. In this case, representations of different domains captured by Φ may still be similar, which conflicts with our intention to maximize I(z;n), and the performance on the target can be easily improved by fine-tuning or adapting Ω with a small number of labeled target samples. On the other hand, directly calculating I(z;n) and taking it as a part of the optimization objective is difficult, especially in the optimization of representation learning (Torkkola, 2003). Achille & Soatto (2018) apply a binary classifier as the nuisance discriminator, and they can estimate I(z;n) after the model training via this discriminator. Here, we find another way to increase I(z;n) indirectly, based on the following theorem.
Theorem 2. Let n be a nuisance that is regarded as a domain index. n = 0 and n = 1 denote that a certain input x comes from one of two different domains. Suppose that these two domains have the same number of samples d, and that the samples of each domain are symmetrically distributed around the centroid. Let z be a representation of x, drawn from the distribution P_Z. An estimator with a characteristic kernel from a Reproducing Kernel Hilbert Space (RKHS) – the Gaussian kernel estimator MMD(P, Q; exp) – is applied on finite samples from the distributions P_{Z|0} and P_{Z|1} to approximate the Maximum Mean Discrepancy (MMD) between these two distributions. If MMD(P_{Z|0}, P_{Z|1}; exp) increases to saturation, the mutual information between z and n will increase.
MMD(P_{Z|0}, P_{Z|1}; exp) = E_{z,z′∼P_{Z|0}}[e^{−∥z−z′∥²}] − 2·E_{z∼P_{Z|0}, z′∼P_{Z|1}}[e^{−∥z−z′∥²}] + E_{z,z′∼P_{Z|1}}[e^{−∥z−z′∥²}]   (3)
We also employ a nuisance discriminator to observe the change of I(z;n) during training. The details of this discriminator design and the proof of Theorem 2 can be found in the Appendix.
NTL Optimization Objective. Based on the above analysis, we design our NTL optimization objective to increase I(z;n) and extract nuisance-dependent representations. Specifically, we compute MMD(P, Q; exp) between the representations of the source and auxiliary domain data and maximize it. For stability concerns, we also set an upper bound on MMD(P, Q; exp). The overall optimization objective of NTL with distance expansion of representation is then as follows:
L_ntl = L_S − min(β, α · L_A · L_dis), where L_dis = min(β′, α′ · MMD(P_{x∼P_X^S}(Φ(x)), P_{x∼P_X^A}(Φ(x)); exp))   (4)
Here, α′ and β′ represent the scaling factor and upper bound of L_dis, respectively (α′ = 0.1 and β′ = 1.0 in our experiments; please refer to the Appendix for more details about α′ and β′). Φ(·) is the feature extractor that outputs the corresponding representations of given inputs. When the target domain is known and accessible, it is regarded as the auxiliary domain, and the above NTL with distance expansion of representation can be conducted directly on the source and auxiliary domains. We call such cases Target-Specified NTL.
3.2 SOURCE DOMAIN AUGMENTATION FOR SOURCE-ONLY NTL In practice, the target domain might be unknown or unavailable. For such cases, we develop a novel generative augmentation framework to generate an auxiliary domain and then leverage the above NTL process, in what we call Source-Only NTL.
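Before turning to the augmentation framework, the pieces above can be tied together in code. The following is a minimal sketch of the Gaussian-kernel MMD estimator of Eq. (3) and the combined objective of Eq. (4), reusing the hypothetical kl_to_onehot helper from the previous sketch; it assumes Φ outputs flattened (batch, dim) features and uses a single fixed-bandwidth kernel for brevity, whereas the released code uses a multi-kernel variant (see Appendix C.5).

```python
import torch

def gaussian_mmd(z0, z1):
    # Biased estimate of Eq. (3) with the plain kernel k(z, z') = exp(-||z - z'||^2):
    # within-domain similarities minus twice the cross-domain similarity.
    k = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2))
    return k(z0, z0).mean() - 2 * k(z0, z1).mean() + k(z1, z1).mean()

def ntl_loss(phi, omega, x_src, y_src, x_aux, y_aux, num_classes,
             alpha=0.1, beta=1.0, alpha_p=0.1, beta_p=1.0):
    # Eq. (4): L_ntl = L_S - min(beta, alpha * L_A * L_dis),
    #          L_dis = min(beta', alpha' * MMD(Phi(x_src), Phi(x_aux); exp)).
    z_src, z_aux = phi(x_src), phi(x_aux)
    loss_src = kl_to_onehot(omega(z_src), y_src, num_classes)
    loss_aux = kl_to_onehot(omega(z_aux), y_aux, num_classes)
    l_dis = torch.clamp(alpha_p * gaussian_mmd(z_src, z_aux), max=beta_p)
    return loss_src - torch.clamp(alpha * loss_aux * l_dis, max=beta)
```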
In the following, we introduce our augmentation framework, which can generate data samples drawn from the neighborhood distribution of the source domain at different distances and in different directions, to serve as the auxiliary data domain in NTL. GAN Design for Source Domain Augmentation. The overall architecture of our augmentation framework is shaped like a generative adversarial network (GAN) made up of a generator G and a discriminator D. G takes in Gaussian noise and a label in one-hot form and outputs a data sample. When D is fed with a sample, it tells whether this sample is fake or not and predicts its label. The adversarial game plays out as G tries to generate data as realistic as possible to fool D, while D distinguishes, to the best of its ability, whether the data fed to it is real. After a sufficient period of such training, the distributions of the generated data and the ground-truth data become too similar to tell apart (Li et al., 2017). Based on this principle, we utilize G to approximate the source domain. However, if we follow standard GAN training, the trained GAN will not generate samples with deterministic labels. Therefore, we combine the intuitions of CGAN (Mirza & Osindero, 2014) and infoGAN (Chen et al., 2016) to propose a new training approach for our augmentation framework. In our approach, G uses an MSE loss to compare its generated data with the real data. D consists of three modules: a feature extractor and, behind the extractor, two classifiers as two branches, where a binary classifier predicts whether the data is real or not, and a multi-class classifier outputs the label. Note that both classifiers rely on the representations extracted by the feature extractor. For training D, we use an MSE loss to evaluate its ability to distinguish real samples from fake ones, and a KL divergence loss to quantify the performance of predicting labels for the real data. Finally, there is an additional training step that enforces the GAN to generate samples of given labels by optimizing G and D simultaneously. The training losses L_G, L_D and L_{G,D} are:
L_D = E_{x∼P_X^S, y∼P_Y^S}[ (1/2)·(∥D_b(x), 1∥₂ + ∥D_b(G(noise, y)), 0∥₂) + D_KL(P(D_m(x)) ∥ P(y)) ]
L_G = E_{y∼P_Y^S}[ ∥D_b(G(noise, y)), 1∥₂ ],   L_{G,D} = E_{y′∼P_Y^U}[ D_KL(P(D_m(G(noise, y′))) ∥ P(y′)) ]   (5)

Algorithm 1: Generative Adversarial Data Augmentation for Source-Only NTL
Require: Source domain data S = {(x, y) | x ∼ P_X^S, y ∼ P_Y^S}; generator G, discriminator D; list of augmentation distances DIS; maximum augmentation direction DIR; GAN training epochs e_GAN; augmentation training epochs e_AUG; initialize the auxiliary domain data A = [ ].
Output: The auxiliary domain data A = {(x, y) | x ∼ P_X^A, y ∼ P_Y^A}.
1: for i = 1 to e_GAN do
2:   use (noise, y ∼ P_Y^S) to optimize G with L_G; use S and G(noise, y ∼ P_Y^S) to optimize D with L_D;
3:   use (noise, y ∼ P_Y^U) to optimize G, D with L_{G,D};
4: for dis in DIS do
5:   for dir = 1 to DIR do
6:     for l in G do
7:       interval = d(l) / DIR;  // function d(·) returns the input dimension of layer l
8:       freeze D and l[0 : dir × interval];  // freeze D, and the first dir parts of the l-th layer in G
9:     for i = 1 to e_AUG do
10:      use (noise, y ∼ P_Y^S) and S to optimize G with L_aug;
11:    A ← A ∪ G(noise, y ∼ P_Y^U);  // use G to generate augmentation data

Here, we use subscripts b and m to denote outputs from the binary classifier and the multi-class classifier of D, respectively, and ∥·, ·∥₂ denotes the MSE between its two arguments. The noise in Eq. (5) is drawn from a Gaussian distribution P_g = N(0, 1), while y′ is drawn from the uniform distribution P_Y^U with K equally likely possibilities. The y and y′ fed to G and to the KL terms in Eq. (5) are the one-hot form vectors of y and y′, respectively.
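For readers who want to map lines 1–3 of Algorithm 1 to code, here is a compact sketch of one GAN training iteration under our own assumptions: D is assumed to expose two heads, D.binary (real/fake score) and D.multi (class logits), on top of its shared feature extractor; optimizer construction and data loading are omitted, and all helper names are ours rather than the released code's.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, x_real, y_real, num_classes, opt_G, opt_D, latent_dim=256):
    noise = torch.randn(x_real.size(0), latent_dim)
    y_onehot = F.one_hot(y_real, num_classes).float()

    # L_G: push D_b towards "real" (= 1) on generated samples (MSE, LSGAN-style).
    opt_G.zero_grad()
    fake = G(noise, y_onehot)
    score_fake = D.binary(fake)
    F.mse_loss(score_fake, torch.ones_like(score_fake)).backward()
    opt_G.step()

    # L_D: real/fake MSE terms plus KL between D_m's prediction and the true one-hot label.
    opt_D.zero_grad()
    score_real, score_fake = D.binary(x_real), D.binary(fake.detach())
    loss_D = 0.5 * (F.mse_loss(score_real, torch.ones_like(score_real))
                    + F.mse_loss(score_fake, torch.zeros_like(score_fake)))
    log_pred = F.log_softmax(D.multi(x_real), dim=1)
    loss_D = loss_D + F.kl_div(log_pred, y_onehot.clamp(min=1e-6), reduction="batchmean")
    loss_D.backward()
    opt_D.step()

    # L_{G,D}: jointly tie samples generated for uniformly drawn labels to those labels.
    y_u = torch.randint(0, num_classes, (x_real.size(0),))
    y_u_onehot = F.one_hot(y_u, num_classes).float()
    opt_G.zero_grad(); opt_D.zero_grad()
    log_pred_fake = F.log_softmax(D.multi(G(noise, y_u_onehot)), dim=1)
    F.kl_div(log_pred_fake, y_u_onehot.clamp(min=1e-6), reduction="batchmean").backward()
    opt_G.step(); opt_D.step()
```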
Augmentation with Different Distances. To generate data at different distances from the source domain, we apply a Gaussian kernel estimator to measure the MMD between the distributions of the source data and the data generated by G. However, if the MMD distance is increased without restriction, the generated data will lose the semantic information, i.e., the essential features for the main task. In order to preserve such semantic information, we use a CrossEntropy loss to add a restriction to the optimization objective. With this restriction, we set multiple upper bounds – DIS – for generating data at different distances (we use DIS to denote a list of dis values of various lengths). The specific objective is as follows:
L_aug = −min{ dis, MMD(P_{x∼P_X^S}(D_z(x)), P_{y∼P_Y^S}(D_z(G(noise, y))); exp) } + E_{y∼P_Y^S}[ D_CE(D_m(G(noise, y)), y) ]   (6)
Here, the subscript z denotes outputs from the feature extractor. For every dis, we freeze D and use Eq. (6) to optimize G. After the optimization, we can generate augmentation data by feeding G with Gaussian noise and labels drawn from P_Y^U. Augmentation with Different Directions. We also investigate how to generate data in different directions. The optimization of the Gaussian MMD follows the gradient direction, which is the fastest way to approach the objective. In that case, all augmented domains at different distances might follow the same direction, i.e., the gradient direction. Therefore, in order to augment neighborhood domains in different directions, we need to introduce more restrictions into the optimization process. Specifically, for the intermediate representations of G, we view each filter (neuron) as corresponding to a feature dimension of the representation. At the beginning of directional augmentation, we make multiple copies of the GAN trained in the previous step (G and D) and pick one copy for each direction. If we want to augment the source in DIR directions, we divide the overall network of G into DIR equal parts. For the augmentation of the first direction, the first part of G is frozen and not updated during optimization. The second direction is augmented by freezing the first two parts of the network and conducting the optimization. The third corresponds to the first three parts, and so on. Given a certain dis, we can optimize Eq. (6) to augment the source domain in DIR directions by gradually freezing G. The detailed flow is shown in Algorithm 1.
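As an illustration of the inner optimization in lines 9–10 of Algorithm 1, the following minimal sketch implements the distance-bounded objective of Eq. (6). It reuses the hypothetical gaussian_mmd helper from the earlier sketch; D.features and D.multi stand for the discriminator's feature extractor and multi-class head, and are our naming, not the released code's.

```python
import torch
import torch.nn.functional as F

def aug_loss(G, D, x_real, y_real, num_classes, dis, latent_dim=256):
    # Eq. (6): reward feature-space MMD only up to the bound `dis`,
    # while cross-entropy on D_m keeps the class semantics of the generated data.
    noise = torch.randn(x_real.size(0), latent_dim)
    y_onehot = F.one_hot(y_real, num_classes).float()
    fake = G(noise, y_onehot)
    mmd = gaussian_mmd(D.features(x_real), D.features(fake))
    ce = F.cross_entropy(D.multi(fake), y_real)
    return -torch.clamp(mmd, max=dis) + ce
```

Freezing D and, per direction, a growing prefix of G's layers (lines 6–8 of Algorithm 1) then restricts which parameters this loss can move, which is what yields augmentations in different directions.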
3.3 APPLICATION OF NTL FOR MODEL INTELLECTUAL PROPERTY PROTECTION Ownership Verification. The proposed NTL can easily verify the ownership of a learning model by triggering misclassification. To achieve this, we can attach a certain trigger patch, shaped like a shallow mask for images, to the source domain data to form the auxiliary domain data, and then conduct NTL on these two domains. After that, the trained model will perform poorly on data with the patch but perform well on data without the patch. This difference in model behavior can be utilized to verify ownership, which is similar to what regular backdoor-based model watermarking approaches do. In contrast, models trained with other methods often present nearly the same performance on data with or without the patch, as the patch is shallow and light. Applicability Authorization. When applying NTL to authorize the model applicability, we aim to restrict the model generalization ability to only the authorized domain, where all the data is attached with a dedicated patch, and we need to make sure that the patch design does not impact the semantic information (note that the unforgeability and uniqueness of the patch are not the main consideration of this work, and we will explore them in future work). For simplicity, we use a shallow mask similar to that of the aforementioned ownership verification as the authorized patch. We first use our generative adversarial augmentation framework to generate neighborhood data of the original source domain. Then, we regard the source data with the patch attached as the source domain in NTL, and use the union of the original source data and the generated neighborhood data, both with and without the patch, as the auxiliary domain. After the NTL training on these two domains, the learning model performs well only on the source domain data with the authorized patch attached, and exhibits low performance in other domains. In this way, we achieve model applicability authorization. 4 EXPERIMENTAL RESULTS Our code is implemented in PyTorch (and provided in the Supplementary Materials). All experiments are conducted on a server running Ubuntu 18.04 LTS, equipped with an NVIDIA TITAN RTX GPU. The datasets and experiment settings used are introduced below. Digits. MNIST (Deng, 2012) (MT) is the most popular digit dataset. USPS (Hull, 1994) (US) consists of digits scanned from envelopes by the U.S. Postal Service. SVHN (Netzer et al., 2011) (SN) contains house number data selected from Google Street View images. MNIST-M (Ganin et al., 2016) (MM) is made by combining MNIST with different backgrounds. Finally, SYN-D (Roy et al., 2018) (SD) is a synthetic dataset generated by combining noisy and complex backgrounds. CIFAR10 & STL10: Both CIFAR10 and STL10 (Coates et al., 2011) are ten-class classification datasets. In order to make these two sets applicable to our problem, we follow the procedure in French et al. (2017). VisDA: This dataset (Peng et al., 2017) contains a training set (VisDA-T) and a validation set (VisDA-V) of 12 object categories. For classifying these datasets, we apply VGG-11 (Simonyan & Zisserman, 2014) for digit recognition, VGG-13 (Simonyan & Zisserman, 2014) for CIFAR10 & STL10, and ResNet-50 (He et al., 2016) and VGG-19 for VisDA. All networks are initialized with ImageNet pre-trained weights (Deng et al., 2009). We use 3 seeds (2021, 2022, 2023) to conduct all experiments three times and present the average performance. Network architectures, parameters and error bars can be found in the Appendix. 4.1 TARGET-SPECIFIED NTL Effectiveness of NTL in Reducing Target Domain Performance. For the digit datasets and CIFAR10 & STL10, we pick all possible domain pairs to carry out experiments. As for VisDA, we regard the training set as the source domain and the validation set as the target. We include the results of standard supervised learning with KL divergence in the source domain and report the performance difference provided by using NTL. Table 1 and Figure 3 show the results for the digit datasets, CIFAR10 & STL10, and VisDA. For NTL, we observe that the target performance of all pairs is degraded to nearly 10% with little accuracy reduction in the source domain. The largest performance degradation of the target domain, from 97.0% to 11.7%, occurs when the source is MM and the target is MT.
Comparing NTL with supervised learning, the average relative performance degradation for the target domain across all cases is approximately 80%. These results demonstrate that Target-Specified NTL can effectively degrade the performance in the target domain without sacrificing the source performance. NTL for Ownership Verification. We use a simple pixel-level mask as the trigger patch, which is shown in Figure 2 (please refer to the Appendix for more details). We use 6 state-of-the-art model watermark removal approaches to test the robustness of NTL-based verification: FTAL (Adi et al., 2018), RTAL (Adi et al., 2018), EWC (Chen et al., 2019), AU (Chen et al., 2019), watermark overwriting, and model pruning (Rouhani et al., 2018). The settings of these methods are included in the Appendix. The results are shown in Table 2. We can see that models trained with NTL behave differently on the data with and without the patch, whereas supervised learning performs nearly the same. Furthermore, all 6 watermark removal methods fail to improve the performance on the patched data, which indicates that NTL-based ownership verification is effective and robust to state-of-the-art watermark removal methods. 4.2 SOURCE-ONLY NTL Effectiveness of NTL in Reducing Non-source Domain Performance. For all three dataset cases, we select one domain as the source and then conduct our generative adversarial augmentation to generate the auxiliary domain. We set a series of discrete dis values from 0.1 to 0.5 with a step of 0.1, and for each dis, we generate augmentation data in 4 directions (DIR = 4). Table 3 and Figure 3 present the results of Source-Only NTL and its comparison with supervised learning. Figure 4 shows the augmentation data for MNIST (other datasets are included in the Appendix). From the results, we can clearly see that models trained with NTL perform worse on all non-source domains compared with supervised learning, and MM-MT has the largest degradation, from 97.0% to 14.7%. NTL for Applicability Authorization. Following the implementation steps outlined in Section 3.3, we carry out experiments on all 3 dataset cases. The experimental results on digits are presented in Table 4 (the results on CIFAR10 & STL10 and VisDA are in the Appendix). From the table, we can see that the model performs very well in the authorized domain while performing poorly in all other domains (with or without the authorized patch). The highest classification accuracy on unauthorized domains is barely 42.7%, which will discourage users from employing this model. This shows the effectiveness of NTL in applicability authorization. 5 CONCLUSION AND FUTURE WORK In this paper, we propose Non-Transferable Learning (NTL), a novel training approach that can restrict the generalization ability of deep learning models to a specific data domain while degrading the performance in other domains. With the help of a generative adversarial augmentation framework, NTL is effective both in the presence and absence of target domains. Extensive experiments on 5 digit recognition datasets, CIFAR10 & STL10, and VisDA demonstrate that the ownership of models trained with NTL can be easily verified, and the verification is resistant to state-of-the-art watermark removal approaches. Moreover, with NTL training, model owners can authorize the model applicability to a certain data domain without worrying about unauthorized usage in other domains.
In future work, it would be interesting to extend NTL to other tasks beyond image classification, e.g., semantic segmentation, object detection/tracking, and natural language processing (NLP) tasks. For tasks where the input data is not images, generating augmentation data would require different methods and could be challenging. Another possible future direction is Multi-Task Learning, where we could explore whether it is possible to restrict the model generalization ability to certain tasks, for instance, we think it might be useful in some cases to restrict the language model to certain tasks. Moreover, yet another interesting direction could be to combine cryptography with NTLbased verification and authorization. ACKNOWLEDGEMENT We gratefully acknowledge the support by National Science Foundation grants 1834701, 1724341, 2038853, 2016240, Office of Naval Research grant N00014-19-1-2496, and research awards from Facebook, Google, PlatON Network, and General Motors. ETHICS STATEMENT In this paper, our studies are not related to human subjects, practices to data set releases, discrimination/bias/fairness concerns, and also do not have legal compliance or research integrity issues. Non-Transferable Learning is proposed to address the shortcomings of current learning model in intellectual property protection. However, if the model trainer themselves is malicious, they may utilize NTL for harmful purposes. For example, a malicious trainer could use NTL to implant backdoor triggers in their model and release the model to the public. In addition, recently, there are domain adaptation (DA) works on adapting the domain-shared knowledge within the source model to the target one without access to the source data (Liang et al., 2020; Ahmed et al., 2021; Kundu et al., 2020; Wang et al., 2021). However, if the source model is trained with NTL, we believe that these DA approaches will be ineffective. In other words, our NTL can be regarded as a type of attack to those source-free DA works. REPRODUCIBILITY STATEMENT The implementation code can be found in https://github.com/conditionWang/NTL. All datasets and code platform (PyTorch) we use are public. In addition, we also provide detailed experiment parameters and random seeds in the Appendix. SUMMARY OF THE APPENDIX This appendix contains additional details for the ICLR 2022 article “Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization”, including mathematical proofs, experimental details and additional results. The appendix is organized as follows: • Section A introduces the theoretical proofs for Proposition 1, Theorem 1 and Theorem 2. • Section B provides additional implementation settings, including the network architectures (Sec- tion B.1) and hyper parameters (Section B.2). • Section C provides additional experimental results, including the augmentation data of other datasets (Section C.1), the model authorization results on CIFAR10 & STL10 and VisDA (Section C.2), the experiments of VisDA on VGG-19 (Section C.3), and the error bars of main experiment results (Section C.4). Section C.5 provides the experiment result of different kernel widths. • In Section D, we discuss possible attacks that can be constructed based on our proposed method. Note that NTL used in this appendix is the abbreviation of Non-Transferable Learning. A THEORY PROOFS A.1 PROOF Proposition 1. Let n be a nuisance for input x. Let z be a representation of x, and the label is y. 
For the information flow in representation learning, we have
I(z;x) − I(z;y|n) ≥ I(z;n)   (7)
Proof: According to Proposition 3.1 in Achille & Soatto (2018), there is a Markov chain: (y, n) → x → z. This chain describes the information flow starting from the ground-truth knowledge (label y and nuisance n) of input x to the extracted representation z. In this case, the information flows from (y, n) to x and then to z. The Data Processing Inequality (DPI) for a Markov chain ensures that I(z;x) ≥ I(z;y,n). With the chain rule, we have I(z;y,n) = I(z;n) + I(z;y|n). Thus, we obtain I(z;x) ≥ I(z;n) + I(z;y|n). ■
Theorem 1. Let ŷ be the label predicted by a representation model when fed with input x, and suppose that ŷ is a scalar random variable and x is balanced on the ground truth label y. Denote the one-hot forms of ŷ and y as vectors ŷ and y, respectively. If the KL divergence loss D_KL(P(ŷ)∥P(y)) increases, the mutual information I(z;y) will decrease.
Proof. Suppose that the information flow in the classifier Ω of a representation model follows a Markov chain z → ŷ → y. Applying the Data Processing Inequality, we have
I(z;y) ≤ I(ŷ;y) = E_{y∼P_Y}[D_KL(P(ŷ|y) ∥ P(ŷ))]   (8)
Because the input data of this representation model is balanced across classes, we suppose that both y and ŷ are drawn from the uniform distribution P_Y^U with K equally likely possibilities. Moreover, though the distribution of ŷ might change a little during training, we can assume it will not become very biased, given the balance of the input data. Both ŷ and y are vectors with K dimensions. In the PyTorch implementation, the computation of the KL divergence loss treats each vector as corresponding to a scalar random variable, with every dimension within the vector regarded as an observation of this variable. In this case, the loss between ŷ and y takes the form D_KL(P(ŷ)∥P(y)) = Σ_{i=1}^{K} ŷ_i · log(ŷ_i / y_i). It is easy to see that D_KL(P(ŷ)∥P(y)) is non-negative, and the KL divergence loss reaches its minimum value D_KL(P(ŷ)∥P(y)) = 0 if and only if ŷ = y. Meanwhile, ŷ and y are scalar random variables equal to the index of the maximum-value dimension of ŷ and y, respectively. Therefore, the probability of ŷ = y decreases as D_KL(P(ŷ)∥P(y)) increases, i.e., D_KL(P(ŷ)∥P(y)) ↑ ⇒ P(ŷ, y) ↓. We can expand Eq. (8) further:
I(ŷ;y) = E_{y∼P_Y}[D_KL(P(ŷ|y) ∥ P(ŷ))] = Σ_y P(y) · Σ_ŷ (P(ŷ, y)/P(y)) · log( P(ŷ, y) / (P(ŷ) · P(y)) )   (9)
Here, both P(ŷ) and P(y) are uniform distributions P_Y^U, and we assumed at the beginning of this proof that P(ŷ) will not become very biased. As a result, we regard P(ŷ) and P(y) as nearly unchanged after training. In addition, P(ŷ, y) decreases with the increase of D_KL(P(ŷ)∥P(y)). Furthermore, we can easily calculate that ∂I(ŷ;y)/∂P(ŷ, y) < 0. In this case, I(ŷ;y) decreases with the increase of D_KL(P(ŷ)∥P(y)), and so does I(z;y), since I(ŷ;y) is its upper bound. ■
Theorem 2. Let n be a nuisance that is regarded as a domain index. n = 0 and n = 1 denote that a certain input x comes from one of two different domains. Suppose that these two domains have the same number of samples d, and that the samples of each domain are symmetrically distributed around the centroid. Let z be a representation of x, drawn from the distribution P_Z.
An estimator with a characteristic kernel from a Reproducing Kernel Hilbert Space (RKHS) – the Gaussian kernel estimator MMD(P, Q; exp) – is applied on finite samples from the distributions P_{Z|0} and P_{Z|1} to approximate the Maximum Mean Discrepancy (MMD) between these two distributions. If MMD(P_{Z|0}, P_{Z|1}; exp) increases to saturation, the mutual information between z and n will increase.
MMD(P_{Z|0}, P_{Z|1}; exp) = E_{z,z′∼P_{Z|0}}[e^{−∥z−z′∥²}] − 2·E_{z∼P_{Z|0}, z′∼P_{Z|1}}[e^{−∥z−z′∥²}] + E_{z,z′∼P_{Z|1}}[e^{−∥z−z′∥²}]   (10)
Proof. According to the definition of Shannon Mutual Information, we have
I(z;n) = E_{n∼P(n)}[D_KL(P(z|n) ∥ P(z))] = E_{n∼P(n)} E_{z∼P(z|n)}[ log( P(z|n) / P(z) ) ]   (11)
Because the two domains have the same number of samples, n follows P(n) with P(n=0) = P(n=1) = 0.5, and Eq. (11) can be rewritten as
I(z;n) = 0.5·E_{z∼P(z|n=0)}[ log( P(z|n=0) / P(z) ) ] + 0.5·E_{z∼P(z|n=1)}[ log( P(z|n=1) / P(z) ) ]   (12)
Next, we denote the probability density functions (PDFs) of P_{Z|0} and P_{Z|1} as p(z) and q(z), respectively. Moreover, according to the law of total probability, the PDF of the distribution P_Z is PDF(P_Z) = 0.5·p(z) + 0.5·q(z). Then we have
I(z;n) = 0.5·∫_{−∞}^{+∞} p(z) · log( 2p(z) / (p(z) + q(z)) ) dz + 0.5·∫_{−∞}^{+∞} q(z) · log( 2q(z) / (p(z) + q(z)) ) dz   (13)
Subsequently, we denote the expectations and variances of P_{Z|0} and P_{Z|1} as (µ_0, σ_0) and (µ_1, σ_1), respectively. With the assumption that the samples of each domain are symmetrically distributed about the centroid, we have p_m = max{p(z)} = p(µ_0) and q_m = max{q(z)} = q(µ_1). Thus, if we use two variables f and g to denote the PDFs of P_{Z|0} and P_{Z|1}, i.e., f = p(z) and g = q(z), we have f ∈ (0, p_m] and g ∈ (0, q_m]. Based on the above analysis and notations, we can split Eq. (13) into 4 terms as follows,
I(z;n) = 0.5·∫_0^{p_m} f · log( 2f / (f + g) ) df + 0.5·∫_0^{p_m} f · log( 2f / (f + g′) ) df + 0.5·∫_0^{q_m} g · log( 2g / (f + g) ) dg + 0.5·∫_0^{q_m} g · log( 2g / (f′ + g) ) dg   (14)
where the superscript ′ indicates the right side of f and g. Next, let us consider the Gaussian kernel estimator MMD(P_{Z|0}, P_{Z|1}; exp). The estimator consists of 3 terms, and we can easily conclude that e^{−∥z−z′∥²} decreases as ∥z − z′∥² increases. Note that "MMD(P_{Z|0}, P_{Z|1}; exp) increases to saturation" means that at least one term increases while the other two terms remain unchanged or increase. For the next step of the proof, we need Theorem 2 of Sriperumbudur et al. (2009).
Theorem 2 (Sriperumbudur et al., 2009). Suppose {(X_i, Y_i)}_{i=1}^{N}, with X_i ∈ M and Y_i ∈ {−1, +1} for all i, is a training sample drawn i.i.d. from µ. Assuming the training sample is separable, let f_svm be the solution to the program inf{∥f∥_H : Y_i f(X_i) ≥ 1, ∀i}, where H is an RKHS with a measurable and bounded kernel k. If k is characteristic, then
1 / ∥f_svm∥_H ≤ γ_k(P, Q) / 2   (15)
where P := (1/d)·Σ_{Y_i=+1} δ_{X_i}, Q := (1/d)·Σ_{Y_i=−1} δ_{X_i}, d is the number of samples per class, and δ represents the Dirac measure.
This theorem provides a bound on the margin of the hard-margin SVM in terms of MMD. Eq. (15) shows that a smaller MMD between P and Q enforces a smaller margin (i.e., a less smooth classifier f_svm, where smoothness is measured as ∥f_svm∥_H). Besides, the Gaussian kernel is a measurable and bounded kernel function in an RKHS. According to this theorem and the nature of the hard-margin SVM, we can obtain that the variances σ_0, σ_1 of P_{Z|0} and P_{Z|1} decrease and that the difference between the expectations µ_0 and µ_1 increases as MMD increases to saturation; this conclusion can also be found in Jegelka et al. (2009). In the following, we will prove that I(z;n) will increase when the difference between µ_0 and µ_1 increases.
Due to the symmetry of the PDFs, both f and g increase on the left side of their own expectation and decrease on the right side. Without loss of generality, we assume that µ_0 lies to the left of µ_1. For the first term of Eq. (14), which corresponds to the left interval of f, the value of g is smaller than in the case before increasing the difference between µ_0 and µ_1. Thus, this term will increase when the difference between µ_0 and µ_1 increases. As for the right interval of f, the maximum value of f + g′ (in the neighborhood of µ_1) comes later than in the case before increasing the difference; besides, the maximum value is also smaller. Therefore, the second term of Eq. (14) will also increase as the difference between µ_0 and µ_1 increases. Similarly, the integrals of g follow the same trend as those of f. In this case, I(z;n) will increase when the difference between µ_0 and µ_1 increases. Next, we will prove that I(z;n) will increase if the variance of either P_{Z|0} or P_{Z|1} decreases. Without loss of generality, we assume the variance of P_{Z|0} decreases while the variance of P_{Z|1} remains unchanged. For the PDF of a distribution, if the variance decreases, the maximum value of the PDF will increase, and there will be two intersection points with the same value between the PDFs. These conclusions are easily proved since a PDF always integrates to 1. We denote the new maximum value of f as p′_m, and the value at the two intersection points as p_=. In addition, during the saturated increase of MMD, we can always find a pair of µ_0 and µ_1 that enables the left side of g to intersect with the right side of f. With the notations in Figure 5, we can change Eq. (14) into
Terms of f = ∫_0^{p_=} f · log( 2f / (f + g) ) df + ∫_{p_=}^{p′_m} f · log( 2f / (f + g) ) df + ∫_{p_m}^{p′_m} f · log( 2f / (f + g′) ) df + ∫_{p_=}^{p_m} f · log( 2f / (f + g′) ) df + ∫_{p_{µ1}}^{p_=} f · log( 2f / (f + g′) ) df + ∫_0^{p_{µ1}} f · log( 2f / (f + g′) ) df   (16)
Terms of g = ∫_0^{q¹_=} g · log( 2g / (f + g) ) dg + ∫_{q¹_=}^{q²_=} g · log( 2g / (f + g) ) dg + ∫_{q²_=}^{q_m} g · log( 2g / (f + g) ) dg + ∫_0^{q_m} g · log( 2g / (f′ + g) ) dg   (17)
From the decrease of σ_0, we can conclude that the 1st, 4th, and 6th terms of Eq. (16) and the 2nd term of Eq. (17) decrease, while the remaining terms of Eq. (16) and Eq. (17) increase. For the next step, we define a new function R(f, g) = f · log( 2f / (f + g) ), whose first-order derivative is ∂R/∂f = log( 2f / (f + g) ) + g / (f + g). We can easily obtain that ∂R/∂f > 0 when f > g. According to this analysis, the decrease of the 1st term of Eq. (16) can be offset by the added increase of the 5th term of Eq. (16) and the 3rd term of Eq. (17); the decrease of the 4th term of Eq. (16) can be offset by the 2nd term of Eq. (16); the decrease of the 6th term of Eq. (16) can be offset by the 4th term of Eq. (17); and the decrease of the 2nd term of Eq. (17) can be offset by the 3rd term of Eq. (16). Moreover, these offsets overcompensate. In this case, we can prove that I(z;n) will increase if the variance of either P_{Z|0} or P_{Z|1} decreases. Considering the combination of the above two cases, if the difference between the expectations increases and the variances of the two distributions decrease, the mutual information will increase. ■
A.2 OBSERVE THE MUTUAL INFORMATION We follow a process similar to that of Achille & Soatto (2018) to observe the change of the mutual information I(z;n). To be specific, let Θ be a binary classifier that, given representation z and nuisance n, tries to predict whether z is sampled from the distribution P_{Z|0} of one domain or the distribution P_{Z|1} of the other.
According to Sønderby et al. (2016), if we train Θ with the loss E_{z∼P_{Z|0}}[log Θ(z)] + E_{z∼P_{Z|1}}[log(1 − Θ(z))], there is always a Bayes-optimal Θ*,
Θ* = P(z|n=0) / ( P(z|n=0) + P(z|n=1) )   (18)
With Eq. (12), if we assume that Θ_0 and Θ_1, trained with E_{z∼P_{Z|0}}[log Θ_0(z)] + E_{z∼P_{Z|1}}[log(1 − Θ_0(z))] and E_{z∼P_{Z|1}}[log Θ_1(z)] + E_{z∼P_{Z|0}}[log(1 − Θ_1(z))], respectively, are close to the optimal ones Θ*_0, Θ*_1, we have
I(z;n) = 0.5·E_{z∼P(z|n=0)}[ log( P(z|n=0) / P(z) ) ] + 0.5·E_{z∼P(z|n=1)}[ log( P(z|n=1) / P(z) ) ] = 0.5·E_{z∼P(z|n=0)}[ log 2Θ_0(z) ] + 0.5·E_{z∼P(z|n=1)}[ log 2Θ_1(z) ]   (19)
With this approximation, we train Θ_0 and Θ_1 for the model at every NTL training round and obtain the curve of I(z;n) shown in Figure 6 (MNIST). According to the figure, I(z;n) increases during the overall training process, which is consistent with our intention. B IMPLEMENTATION SETTINGS B.1 NETWORK ARCHITECTURE To build the classification models, we use several popular architectures as the bottom feature extractor and attach fully-connected layers to them as the top classifier, as shown in Table 5. Specifically, the backbone network for digits is VGG-11, that for CIFAR10 & STL10 is VGG-13, and we use both ResNet-50 and VGG-19 for VisDA. The classifiers of all models are the same, i.e., 3 linear layers with ReLU and dropout. As for the GAN in the augmentation framework, the generator G is made up of 4 ConvTranspose blocks and 2 Residual blocks, and the discriminator D consists of a feature extractor with 4 convolution layers, a binary classifier, and a multi-class classifier. These two classifiers are composed of sequential fully-connected layers and share the representations extracted by the front feature extractor. The detailed architectures are shown in Tables 6 and 7. B.2 HYPERPARAMETERS Scaling factors and upper bounds. As introduced in Section 3.1 of the main paper, there are two scaling factors (α, α′) that control the trade-off between the maximization of I(z;n) and the sufficiency property of the source domain. Here, we conduct experiments using different values (α = 0.01, 0.05, 0.10, 0.20, 0.50 and α′ = 0.01, 0.05, 0.10, 0.20, 0.50) and evaluate their impact on the performance of NTL. For Target-Specified NTL, we select the combinations MNIST→USPS, STL10→CIFAR10, and VisDA-T→VisDA-V. For Source-Only NTL, we choose MNIST→Non-S, STL10→Non-S, and VisDA-T→Non-S as the representatives to carry out experiments. The results are presented in Tables 8 and 9. It is easy to conclude that NTL works effectively with different scaling factors. As for the upper bounds (β, β′), we set them to prevent the auxiliary domain loss and the MMD distance from dominating the optimization objective and affecting the convergence of training. Training parameters. For the optimization of NTL, we utilize Adam as the optimizer, with learning rate γ = 0.0001 and a batch size of 32. For all datasets, we randomly select 8,000 samples from their training sets as the source data, and 1,000 samples from their test sets as the test data (if a dataset does not have a test set, we select its test data from the training set without overlapping with the chosen 8,000 source samples). The sample quantities of the source and auxiliary domains are always the same. In the training of the adversarial augmentation, the optimizer is also Adam, and we set the learning rate to γ = 0.0002 with the two momentum decay terms set to 0.5 and 0.999. The batch size is 64, and the dimension of the latent space fed to the generator is 256.
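As a small illustration of the training parameters above, here is a sketch of the optimizer setup, assuming phi/omega are the NTL feature extractor and classifier and G/D are the augmentation GAN modules; interpreting the two momentum decay terms as Adam's betas is our reading of the description above.

```python
import torch

# NTL training (Section B.2): Adam, learning rate 1e-4, batch size 32.
ntl_optimizer = torch.optim.Adam(
    list(phi.parameters()) + list(omega.parameters()), lr=1e-4)

# Adversarial augmentation training: Adam, learning rate 2e-4,
# momentum decay terms 0.5 / 0.999, batch size 64, latent dimension 256.
g_optimizer = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
d_optimizer = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
```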
B.3 TRIGGERING AND AUTHORIZATION PATCH As mentioned in Sections 4.1 and 4.2 of our main paper, we attach a patch to the data to utilize NTL for ownership verification and usage authorization. We create the patch in a simple way. Specifically, for the pixel in the i-th row and j-th column of an RGB image, if either i or j is even, then a value v is added to the R channel of this pixel (the channel value cannot exceed 255). Intuitively, the patch is dependent on the pixel values of each image. Thus, the feature-space changes brought by attaching these patches differ across images. In our experiments, if the content of the image is simple, e.g., MNIST, USPS and SVHN, a small value of v can shift the feature space sufficiently, but for more complicated images, we have to increase v to make source images with and without the patch distinguishable. Specifically, we pick the value as follows: MNIST, USPS, SVHN (v = 20); MNIST-M, SYN-D, CIFAR10, STL10 (v = 80); VisDA (v = 100). As mentioned in the main paper, we will explore the unforgeability and uniqueness of patch generation in future work. B.4 IMPLEMENTATION OF WATERMARK REMOVAL APPROACHES In Section 4.1 of the main paper, we implement 6 model watermark removal approaches to verify the effectiveness of NTL-based ownership verification. Here, we introduce how these approaches are implemented. FTAL (Adi et al., 2018) is an approach that fine-tunes the entire watermarked model using the original training data. To implement it, we use 30% of the training set that has been learned with NTL to fine-tune the entire model. When using RTAL (Adi et al., 2018), the top classifier is randomly initialized before fine-tuning. In our experiments, we load the feature extractor of the model trained with NTL, randomly initialize a classifier to attach to the extractor, and then use 30% of the training set to fine-tune this combined model. As for EWC (Chen et al., 2019), we use the code of Chen et al. (2019) to compute the Fisher information of the network parameters and adjust the learning rate of fine-tuning. The data used by EWC is also 30% of the training set. Finally, AU (Chen et al., 2019) utilizes the watermarked model to pseudo-label additional unlabeled samples from other similar domains, and these samples are used to fine-tune the model together with the original training set. Following this principle, we use 30% of the training set and the same quantity of unlabeled samples from other domains (the ratio between these two parts is 1:1) to fine-tune the model trained with our NTL. We run all fine-tuning methods for 200 epochs. For watermark overwriting, we overwrite a new backdoor-based watermark (Zhang et al., 2018) on the model trained with NTL. Specifically, we attach a white corner (3 × 3) as the backdoor trigger to 1/15 of the training set, and follow the training approach of Zhang et al. (2018) to write the watermark into the model. In addition, similar to other watermarking works (Rouhani et al., 2018), we also test whether NTL-based verification is resistant to model pruning, and apply a layer-wise pruning method (Han et al., 2015) to prune 70% of the parameters of the model trained with NTL.
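To make the patch construction in Section B.3 concrete, here is a minimal sketch of the pixel-level patch, assuming images are uint8 RGB arrays in height-width-channel layout; the function name is ours.

```python
import numpy as np

def attach_patch(image, v=20):
    # Section B.3: for the pixel at row i, column j, add v to the R channel
    # whenever i or j is even; channel values are capped at 255.
    # v = 20 for MNIST/USPS/SVHN, 80 for MNIST-M/SYN-D/CIFAR10/STL10, 100 for VisDA.
    patched = image.astype(np.int32)
    rows = np.arange(image.shape[0])[:, None]
    cols = np.arange(image.shape[1])[None, :]
    mask = (rows % 2 == 0) | (cols % 2 == 0)
    red = patched[..., 0]
    red[mask] = np.minimum(red[mask] + v, 255)
    return patched.astype(np.uint8)
```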
C ADDITIONAL EXPERIMENTAL RESULTS C.1 AUGMENTATION DATA OF OTHER DATASETS In the main paper, we present the augmentation data of MNIST; in this section, we include the augmentation data of the other datasets as follows: Figure 7 for USPS, Figure 8 for SVHN, Figure 9 for MNIST-M, Figure 10 for SYN-D, Figure 11 for CIFAR10, and Figures 12 and 13 for VisDA-T. C.2 MODEL USAGE AUTHORIZATION ON CIFAR10 & STL10 AND VISDA Here we present the experiments on authorizing model usage for CIFAR10 & STL10 and VisDA, shown in Table 10. According to the results, the model performs well on the data with the authorized patch attached and performs poorly on all other samples. C.3 ADDITIONAL RESULTS OF VISDA ON VGG-19 To demonstrate the effectiveness of NTL on different network architectures, we also carry out the VisDA experiments on VGG-19. All other settings are the same as before, and the results are shown in Table 11. We can easily see that the performance is consistent with the aforementioned experiments, which shows the wide applicability of NTL. C.4 ERROR BAR We conduct all experiments with three random seeds (2021, 2022, 2023) and present the error ranges in this section. Table 12 gives the error range of Target-Specified NTL corresponding to Table 1 of the main paper; Table 13 presents the error of the experiments on Source-Only NTL corresponding to Table 3 of the main paper; Table 14 shows the error of model authorization, which is presented as Table 4 in our main paper. C.5 THE IMPACT OF GAUSSIAN KERNEL BANDWIDTH In our implementation, we utilize a series of Gaussian kernels to approximate MMD, implemented as the MK-MMD of Long et al. (2015). Specifically, the bandwidth of the kernels we use is controlled by two parameters, mul and num (we use mul = 2.0, num = 5 in the experiments presented in the main text of the paper). The bandwidths of these kernels are as follows:
B = { (∥x_1 − x_2∥² / (n² − n)) · mul^{i − ⌊num/2⌋} }_{i=0}^{num−1}   (20)
where x_1 and x_2 are two input data batches of size n, and ⌊·⌋ extracts the integer part of the input. To investigate the impact of the kernel bandwidth, we select a series of mul and num values to conduct Source-Only NTL experiments on MNIST, CIFAR10 and VisDA-T, and the results are shown in Table 15. According to the results, we observe that the performance difference between the source and target is nearly the same across different mul and num values. As these two parameters directly determine the kernel bandwidth, these results demonstrate that the kernel bandwidth does not have a significant impact on NTL performance. D POSSIBLE ATTACKS BASED ON NTL Although we propose Non-Transferable Learning for protecting intellectual property in AIaaS, if the model owner is malicious, they can also utilize NTL to covertly poison their model or implant backdoor triggers into it, and then release the model to the public. In the setting of applying Target-Specified NTL to verify model ownership, the patch we used can also be regarded as a trigger for a certain misclassification backdoor. From the results of ownership verification in the main paper, we can see the possibility of launching NTL-based targeted backdoor attacks. As for the case of Source-Only NTL, our objective takes the form of a universal poisoning attack through restricting the generalization ability of models. The results in our main paper demonstrate the feasibility of this poisoning attack.
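For reference, the bandwidth list of Eq. (20) in Section C.5 above can be sketched as follows; this follows the common MK-MMD implementation style (Long et al., 2015) and assumes x1 and x2 are flattened batches that are concatenated before computing pairwise distances, which is our reading of the formula rather than the authors' exact code.

```python
import torch

def gaussian_kernel_bandwidths(x1, x2, mul=2.0, num=5):
    # Eq. (20): base bandwidth = sum of squared pairwise distances / (n^2 - n),
    # scaled by mul^(i - floor(num / 2)) for i = 0, ..., num - 1.
    x = torch.cat([x1, x2], dim=0)
    n = x.size(0)
    sq_dists = torch.cdist(x, x).pow(2)
    base = sq_dists.sum() / (n * n - n)
    return [base * mul ** (i - num // 2) for i in range(num)]
```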
In addition, there have recently been more domain adaptation (DA) works on adapting the domain-shared knowledge within the source model to the target one without access to the source data (Liang et al., 2020; Ahmed et al., 2021; Kundu et al., 2020). However, if the source model is trained with Source-Only NTL, we believe that these DA approaches will be ineffective. In other words, our NTL can be regarded as a type of attack on these source-free DA works.
1. What is the main idea introduced by the paper, and what are its potential applications? 2. How does the proposed approach differ from traditional domain adaptation methods? 3. What are the strengths and limitations of the proposed technique, particularly regarding its ability to generalize to unseen domains? 4. Are there any potential ethical concerns or risks associated with using non-transferable learning, such as increased vulnerability to privacy attacks? 5. Can the ideas presented in the paper be applied to other types of models beyond image classification, such as natural language processing or language models?
Summary Of The Paper Review
Summary Of The Paper This paper introduces the idea of "non-transferable learning", which is roughly what the name indicates. The authors explain the value of this as a security/IP protection tool to protect the model from being used on unauthorized data. In addition, this presents a kind of attack against domain adaptation works that try to improve generalization bounds without access to source data. Review Basically, the authors design a clever technique for learning nuisance-dependent representations. Such a representation can be made to perform accurately for a particular source domain, but poorly for another target domain. Furthermore, the authors design a GAN-type technique for generating samples outside the source domain to serve as a kind of generic target domain. This is obviously important, as one cannot know to which target domain the model would later be adapted. This is a very interesting paper, although I have to say I'm not an expert in this topic at all. Most of the paper is really nicely written and is pretty easy to follow. The experimental verification is clear and detailed, but mostly limited to small images, so it's hard to say how it would actually perform in real-life scenarios. A couple of questions come to mind: Can you imagine uses of this for other kinds of models, e.g., language models, or is this mainly meaningful for image data? It sounds like an NTL representation is by nature highly vulnerable to training data privacy attacks, like membership inference. Have you considered whether one could use the NTL representation to particularly efficiently generate samples from (something close to) the training data distribution?
ICLR
Title Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization Abstract As Artificial Intelligence as a Service gains popularity, protecting well-trained models as intellectual property is becoming increasingly important. There are two common types of protection methods: ownership verification and usage authorization. In this paper, we propose Non-Transferable Learning (NTL), a novel approach that captures the exclusive data representation in the learned model and restricts the model generalization ability to certain domains. This approach provides effective solutions to both model verification and authorization. Specifically: 1) For ownership verification, watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods. By comparison, our NTL-based ownership verification provides robust resistance to stateof-the-art watermark removal methods, as shown in extensive experiments with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets. 2) For usage authorization, prior solutions focus on authorizing specific users to access the model, but authorized users can still apply the model to any data without restriction. Our NTL-based authorization approach instead provides data-centric protection, which we call applicability authorization, by significantly degrading the performance of the model on unauthorized data. Its effectiveness is also shown through experiments on aforementioned datasets. 1 INTRODUCTION Deep Learning (DL) is the backbone of Artificial Intelligence as a Service (AIaaS) (Ribeiro et al., 2015), which is being provided in a wide range of applications including music composition (Briot et al., 2020), autonomous driving (Li et al., 2021a), smart building (Xu et al., 2020a), etc. However, a good model can be expensive to obtain: it often requires dedicated architecture design (He et al., 2016), a large amount of high-quality data (Deng et al., 2009), lengthy training on professional devices (Zoph & Le, 2016), and expert tuning (Zhang et al., 2019). Thus, well-trained DL models are valuable intellectual property (IP) to the model owners and need protection. Generally speaking, there are two aspects in protecting an IP in AIaaS, verifying who owns the model and authorizing how the model can be used. These two aspects led to the development of two types of protection techniques: ownership verification and usage authorization. For ownership verification, prior works proposed approaches such as embedding watermarks into network parameters (Song et al., 2017), learning special behaviors for pre-defined triggers (Fan et al., 2019), and extracting fingerprints from the model (Le Merrer et al., 2020). However, they are vulnerable to state-of-art watermark removal approaches that are based on model fine-tuning or retraining (Chen et al., 2019), watermark overwriting and model pruning (Rouhani et al., 2018). For model usage authorization, most prior works were built on encrypting neural network parameters with a secret key (Alam et al., 2020; Chakraborty et al., 2020) and ensuring that models can only be used by users with this key. However, authorized users may use the model on any data without restriction. We believe that for comprehensive IP protection, the goal of usage authorization is not *These authors contributed equally to this work. 
Data space Reduced model generalization bound Source domain Source-Only NTL Source domain Outside of source domain Data space Target Target-Specified NTL Source domain Generalization bound Data space Supervised Learning Figure 1: A visualization of the generalization bound trained with different approaches. The left figure shows Supervised Learning in the source domain, which can derive a wide generalization area. When Target-Specified NTL is applied (middle), the target domain is removed from the generalization area. As for Source-Only NTL (right), the generalization area is significantly reduced. only who is allowed to use the model, but also what data can the model be used on. We thus consider a new data-centric aspect of usage authorization in this work, i.e., authorizing models to certain data for preventing their usage on unauthorized data. We call this applicability authorization. Note that applicability authorization goes far beyond IP protection. It can also be viewed as a way to “control” how machine learning models are used in general. One example would be a company (e.g., Meta) trains a recommendation system from adult data and uses applicability authorization to prevent this system from being used by teenagers. Our Approach and Contribution. In this work, we propose Non-Transferable Learning (NTL), a novel approach that can robustly verify the model ownership and authorize the model applicability on certain data. Intuitively, NTL goes against the current research trend of improving the generalization ability of models across various domains, e.g., domain generalization and adaptation (Zhou et al., 2020; Dong et al., 2020). Instead, NTL tries to make the generalization bound of DL models more explicit and narrower, by optimizing the model to learn domain-dependent features and thereby making the model exclusive to certain domains. More specifically, we consider two domains: the source domain where we want the models to perform well, and the auxiliary domain where we aim to degrade the model performance. And if the model trained with NTL is applied to a target domain similar to the auxiliary one, the performance should also be poor. As shown in Figure 1, we have developed two types of NTL approaches: Target-Specified NTL and Source-Only NTL. • Target-Specified NTL assumes that the source and target domains are both known. We then treat the target domain as the auxiliary domain and enlarge the distance of representations between the source and auxiliary domains. Target-Specified NTL can be used to verify the model ownership by triggering misclassification. While previous model watermarks can often be easily removed because the model memorization of such watermarks encounters catastrophic forgetting (Kemker et al., 2018) during watermark removal, our NTL-based verification is resistant to state-of-art watermark removal approaches, because the misclassification behavior is dependent on the overall target-private features that have little correlation with the source-private features for the main task. • In Source-Only NTL, the target domain is unknown and thus our approach relies solely on the source domain, aiming to degrade the performance in all other domains. In this case, NTL generates the auxiliary domain from a novel generative adversarial augmentation framework and then increases the representation distance. 
Source-Only NTL can provide authorization to certain data rather than particular users or devices, by degrading the model performance on all other data domains other than the source domain. This provides data-centric applicability authorization, with which we can also prevent unauthorized model usage that are caused by the secret key leakage and cannot be addressed by prior model authorization methods. In addition to proposing the novel concept of NTL and developing its two approaches, we are also able to experimentally validate their effectiveness. We conducted extensive experiments on 5 digit sets, CIFAR10 & STL10 and VisDA. For target-specified cases, we demonstrate how to apply NTL for model ownership verification. Our experiments show that the state-of-art model watermark removal methods are ineffective on NTL-based ownership verification. For source-only NTL, our experiments demonstrate its effectiveness in authorizing model applicability to certain data. 2 RELATED WORK Domain Generalization & Adaptation (DG & DA). DG aims to generalize learning models with available source domains to unseen target domains (Blanchard et al., 2011). A number of methods have been proposed for domain discrepancy minimization (Li et al., 2020), adversarial training (Rahman et al., 2020; Zhao et al., 2020c), invariance representation learning (Zhou et al., 2020; Piratla et al., 2020), etc. Recently, there is significant interest on conducting DG with one source domain only, for which well-crafted data augmentation approaches (Qiao et al., 2020; Zhao et al., 2020b; Li et al., 2021b; Xu et al., 2020b) have been proposed to expand the input space. DA is also related to improving the generalization ability of models across domains (Ahmed et al., 2021), and while DA can access the target data, DG has no access to any target sample (Xu et al., 2021; Dong et al., 2021). Unlike DG or DA, we try to weaken the generalization ability of models by expanding the distance between representations of different domains. Our method works effectively for both the target-specified and the source-only cases with a novel adversarial augmentation framework. Intellectual Property (IP) Protection for Deep Learning (DL). While DL has shown its unparalleled advantages in various applications, there are significant challenges in protecting DL models. For instance, Inference Attack (Shokri et al., 2017; Wang et al., 2019) can steal private information about the target DL model. Model Inversion Attack (He et al., 2019; Salem et al., 2020) is able to recover the input data via an analysis of the model prediction. These two types of attacks directly threaten the privacy of model users, while there are also many active attacks (Suciu et al., 2018; Yao et al., 2019) that lead DL models to produce abnormal behaviors. In addition, verifying model ownership and authorizing model usage have become important issues with the development of AIaaS. There have been a number of watermarking approaches addressing the verification of model ownership. For instance, Zhang et al. (2018) and Li et al. (2019) train a neural network on the original datasets and the watermarked one assigned with a particular label, which makes the model behave abnormally when it encounters watermarked data. Song et al. (2017) and Uchida et al. (2017) inject a pattern that is similar to regular photograph watermarks (Cheng et al., 2021) into the least significant bits of the model parameters and provide the corresponding decoding methods. Le Merrer et al. 
(2020) and Zhao et al. (2020a) make use of adversarial examples to extract fingerprints from learned neural networks without accessing network weights. Compared to these approaches, our NTL can achieve model ownership verification by triggering universal misclassification. Moreover, with extensive experiments, we also demonstrate that state-of-art model watermark removal methods, e.g., FTAL and RTAL (Adi et al., 2018), EWC and AU (Chen et al., 2019), watermark overwriting and model pruning (Rouhani et al., 2018) are not effective to NTL-based verification. Model usage authorization is another aspect in protecting model intellectual property. For instance, Alam et al. (2020) encrypt every network parameter with a secret key. Chakraborty et al. (2020) generate a secret key from hardware fingerprints of a particular device, and require that only users who possess this device can load and employ the model. Different from these methods, our NTL focuses on providing data-centric protection via applicability authorization, which retains good model performance on authorized data while degrading model performance for other data domains. To the best of our knowledge, this is the first work that prevents model usage on unauthorized data via model learning. 3 METHODOLOGY In this section, we introduce our NTL approach. Section 3.1 presents the inspiration and the design of the optimization objective of NTL, which is the core for both target-specified and source-only cases. Section 3.2 presents the generative augmentation framework for source-only cases. Our method is based on the concept of generative adversarial networks (GAN), however our goal is not to propose a new GAN but to design an effective augmentation method in the context of NTL. Section 3.3 introduces the application of NTL on ownership verification and applicability authorization. 3.1 NON-TRANSFERABLE LEARNING WITH DISTANCE EXPANSION OF REPRESENTATION We consider a source domain with labeled samples S= {(x,y)∥x∼PSX ,y∼PSY }, where PX and PY are the input and label distributions, respectively. In this work, we use image classification as the learning task with K possible classes, in which case x and y are matrix-valued and scalar random variables, respectively. In addition, we consider an auxiliary domain A={(x,y)∥x∼PAX ,y∼PAY }. The source domain S and the auxiliary domain A will be fed into a deep neural network, and without loss of generality, we split the neural network into two parts, one is a feature extractor Φ on the bottom, and the other is a classifier Ω on the top. Inspiration from Information Bottleneck. Our NTL, in particular the design of optimization objective, is inspired by the analysis of Information Bottleneck (IB) (Tishby et al., 2000). Let us start by introducing Shannon Mutual Information (SMI). In addition to input x and label y, we also regard representation z extracted by Φ as a random variable. The SMI between two random variables, e.g., between z and x, is defined as I(z;x)=Ex∼PX [DKL(P(z|x)∥P(z))], where DKL(·) represents the Kullback-Leible (KL) divergence and P(·) is the distribution. 
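As a reference point for the notation above, the following is a minimal sketch of a classification network split into a bottom feature extractor Φ and a top classifier Ω. The VGG-11 backbone and the three-linear-layer head follow Appendix B.1, but the hidden sizes, dropout rate, and class/module names here are our own assumptions, not the authors' released architecture.

```python
import torch.nn as nn
from torchvision.models import vgg11

class NTLNet(nn.Module):
    """Split a classifier into a bottom feature extractor (Phi) and a top classifier (Omega)."""
    def __init__(self, num_classes=10, feat_dim=512):
        super().__init__()
        backbone = vgg11()  # ImageNet pre-trained weights can be loaded as in Appendix B.1
        self.phi = nn.Sequential(backbone.features,          # Phi: representation z = Phi(x)
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.omega = nn.Sequential(                          # Omega: three linear layers with ReLU/dropout
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes))

    def forward(self, x):
        z = self.phi(x)           # representation used later for the MMD term
        return self.omega(z), z   # class logits and representation
```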
In IB theory, considering effectiveness, privacy and generalization, an optimal representation has three properties (Achille & Soatto, 2018): (1) Sufficiency: label y sufficiently differentiates representation z, i.e., $I(z;y)=I(x;y)$; (2) Minimality: z needs to represent as little information about input x as possible, i.e., $\min I(z;x)$; (3) Invariance: z is optimal, meaning that it does not overfit to spurious correlations between y and a nuisance n embedded in x, i.e., $I(z;n)=0$. IB theory assumes that the nuisance n is a factor that affects input x and that, together with y, determines to some extent what x looks like. For instance, in domain generalization, the nuisance n can be regarded as a domain index that indicates which domain a certain sample comes from (Du et al., 2020). In our problem, different from the objective of IB theory, NTL forces the model to extract nuisance-dependent representations, which is the opposite of the invariance property. In other words, we aim to increase $I(z;n)$, and we have the following proposition for achieving this aim. Proposition 1. Let n be a nuisance for input x. Let z be a representation of x, and let y be the label. For the information flow in representation learning, we have
$I(z;x) - I(z;y|n) \geq I(z;n)$ (1)
The detailed proof of Proposition 1 is included in the Appendix. Optimization Objective Design. Proposition 1 provides guidance for maximizing $I(z;n)$. First, unlike in IB theory, we do not minimize $I(z;x)$ for the minimality property. In addition, we try to minimize $I(z;y|n)$ through the design of an optimization objective that measures the error between the model prediction and the ground truth during the training of neural networks. Specifically, instead of using the typical cross-entropy loss to measure this error, we apply a KL divergence loss to direct the training, and we have the following theorem. Theorem 1. Let ŷ be the label predicted by a representation model when fed with input x, and suppose that ŷ is a scalar random variable and x is balanced on the ground truth label y. Denote the one-hot forms of ŷ and y as $\hat{\mathbf{y}}$ and $\mathbf{y}$, respectively. If the KL divergence loss $D_{KL}(P(\hat{\mathbf{y}})\,\|\,P(\mathbf{y}))$ increases, the mutual information $I(z;y)$ will decrease. The detailed proof of Theorem 1 is provided in the Appendix. According to this theorem, $I(z;y|n)$ can be minimized by increasing the KL divergence loss of training data conditioned on different n. However, as stated in Section 1, we aim to degrade the model performance in the auxiliary domain while maintaining good model performance in the source domain. Thus, we only minimize $I(z;y|n)$ by increasing the KL divergence loss of the auxiliary domain data. To achieve this goal, we design a loss $\mathcal{L}^*_{ntl}$ shaped like the difference between the KL divergence losses of the source and auxiliary domains ($\mathcal{L}_S$, $\mathcal{L}_A$), i.e., $\mathcal{L}_S = \mathbb{E}_{x\sim P^S_X}[D_{KL}(P(\Omega(\Phi(x)))\,\|\,P(\mathbf{y}))]$ and $\mathcal{L}_A = \mathbb{E}_{x\sim P^A_X}[D_{KL}(P(\Omega(\Phi(x)))\,\|\,P(\mathbf{y}))]$. Specifically, this loss can be written as follows:
$\mathcal{L}^*_{ntl} = \mathcal{L}_S - \min(\beta,\ \alpha\cdot\mathcal{L}_A)$ (2)
Here, α is the scaling factor for $\mathcal{L}_A$ (α = 0.1 in our experiments), and β is an upper bound that prevents $\mathcal{L}_A$ from becoming too large and dominating the overall loss (β = 1.0 in our experiments; please see the Appendix for more details about α and β). Moreover, if we use n = 0 and n = 1 to denote the source and auxiliary domain respectively, the optimization of Eq. (2) guarantees the sufficiency property for the source domain, $I(z;y|n=0)=I(x;y|n=0)$, while increasing $\mathcal{L}_A$ decreases $I(z;y|n=1)$.
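A minimal sketch of the base objective in Eq. (2) could look as follows. It assumes a model that returns class logits (and a representation, as in the sketch above), one-hot targets smoothed slightly so the KL term stays finite, and the reported values α = 0.1, β = 1.0; it is an illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def kl_cls_loss(logits, labels, num_classes):
    """KL(P(y_hat) || P(y)) with y a (lightly smoothed) one-hot target."""
    log_pred = F.log_softmax(logits, dim=1)
    onehot = F.one_hot(labels, num_classes).float()
    onehot = onehot.clamp(min=1e-6)                      # avoid log(0) in the target
    onehot = onehot / onehot.sum(dim=1, keepdim=True)
    return F.kl_div(log_pred, onehot, reduction="batchmean")

def ntl_base_loss(model, x_src, y_src, x_aux, y_aux, num_classes, alpha=0.1, beta=1.0):
    """Eq. (2): L_S - min(beta, alpha * L_A)."""
    logits_src, _ = model(x_src)
    logits_aux, _ = model(x_aux)
    l_s = kl_cls_loss(logits_src, y_src, num_classes)
    l_a = kl_cls_loss(logits_aux, y_aux, num_classes)
    return l_s - torch.clamp(alpha * l_a, max=beta)      # clamp implements the min(beta, .) bound
```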
According to Proposition 1, we can move the upper bound of $I(z;n)$ to a higher baseline by optimizing Eq. (2). However, such optimization might only make the classifier Ω more sensitive to domain features and have little effect on the feature extractor Φ. In this case, the representations of different domains captured by Φ may still be similar, which conflicts with our intention to maximize $I(z;n)$, and the performance on the target could easily be improved by fine-tuning or adapting Ω with a small number of labeled target samples. On the other hand, directly calculating $I(z;n)$ and taking it as part of the optimization objective is difficult, especially in the optimization of representation learning (Torkkola, 2003). Achille & Soatto (2018) apply a binary classifier as the nuisance discriminator, with which $I(z;n)$ can be estimated after model training. Here, we find another way to increase $I(z;n)$ indirectly, based on the following theorem. Theorem 2. Let n be a nuisance that is regarded as a domain index. n = 0 and n = 1 denote that a certain input x comes from one of two different domains. Suppose that these two domains have the same number of samples d, and that the samples of each domain are symmetrically distributed around their centroid. Let z be a representation of x, drawn from distribution $P_Z$. An estimator with a characteristic kernel from a Reproducing Kernel Hilbert Space (RKHS) – the Gaussian kernel estimator $\mathrm{MMD}(P,Q;\exp)$ – is applied on finite samples from the distributions $P_{Z|0}$ and $P_{Z|1}$ to approximate the Maximum Mean Discrepancy (MMD) between these two distributions. If $\mathrm{MMD}(P_{Z|0},P_{Z|1};\exp)$ increases to saturation, the mutual information between z and n will increase.
$\mathrm{MMD}(P_{Z|0},P_{Z|1};\exp) = \mathbb{E}_{z,z'\sim P_{Z|0}}[e^{-\|z-z'\|^2}] - 2\,\mathbb{E}_{z\sim P_{Z|0},\,z'\sim P_{Z|1}}[e^{-\|z-z'\|^2}] + \mathbb{E}_{z,z'\sim P_{Z|1}}[e^{-\|z-z'\|^2}]$ (3)
We also employ a nuisance discriminator to observe the change of $I(z;n)$ during training. The details of this discriminator design and the proof of Theorem 2 can be found in the Appendix. NTL Optimization Objective. Based on the above analysis, we design our NTL optimization objective to increase $I(z;n)$ and extract nuisance-dependent representations. Specifically, we compute $\mathrm{MMD}(P,Q;\exp)$ between the representations of the source and auxiliary domain data and maximize it. For stability, we also set an upper bound on $\mathrm{MMD}(P,Q;\exp)$. The overall optimization objective of NTL with distance expansion of representation is then
$\mathcal{L}_{ntl} = \mathcal{L}_S - \min(\beta,\ \alpha\cdot\mathcal{L}_A\cdot\mathcal{L}_{dis})$, where $\mathcal{L}_{dis} = \min\big(\beta',\ \alpha'\cdot\mathrm{MMD}(P_{x\sim P^S_X}(\Phi(x)),\,P_{x\sim P^A_X}(\Phi(x));\exp)\big)$ (4)
Here, α′ and β′ denote the scaling factor and upper bound of $\mathcal{L}_{dis}$, respectively (α′ = 0.1 and β′ = 1.0 in our experiments; please refer to the Appendix for more details about α′ and β′). Φ(·) is the feature extractor that outputs the representations of given inputs. When the target domain is known and accessible, it is regarded as the auxiliary domain, and the above NTL with distance expansion of representation can be applied directly to the source and auxiliary domains. We call such cases Target-Specified NTL.
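The following sketch implements the single-Gaussian-kernel MMD estimator of Eq. (3) and assembles the full objective of Eq. (4) on top of the KL losses from the previous sketch; it assumes equal-sized source and auxiliary batches and uses the reported α′ = 0.1, β′ = 1.0. It is an illustration under those assumptions, not the authors' released code.

```python
import torch

def gaussian_mmd(z0, z1):
    """Biased estimate of Eq. (3) with the fixed kernel exp(-||z - z'||^2)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b, p=2) ** 2)
    return k(z0, z0).mean() - 2 * k(z0, z1).mean() + k(z1, z1).mean()

def ntl_loss(model, x_src, y_src, x_aux, y_aux, num_classes,
             alpha=0.1, beta=1.0, alpha_p=0.1, beta_p=1.0):
    """Eq. (4): L_S - min(beta, alpha * L_A * L_dis), with L_dis = min(beta', alpha' * MMD)."""
    logits_src, z_src = model(x_src)
    logits_aux, z_aux = model(x_aux)
    l_s = kl_cls_loss(logits_src, y_src, num_classes)   # defined in the Eq. (2) sketch above
    l_a = kl_cls_loss(logits_aux, y_aux, num_classes)
    l_dis = torch.clamp(alpha_p * gaussian_mmd(z_src, z_aux), max=beta_p)
    return l_s - torch.clamp(alpha * l_a * l_dis, max=beta)
```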
3.2 SOURCE DOMAIN AUGMENTATION FOR SOURCE-ONLY NTL In practice, the target domain might be unknown or unavailable. For such cases, we develop a novel generative augmentation framework that generates an auxiliary domain and then leverages the above NTL process, in what we call Source-Only NTL. In the following, we introduce our augmentation framework, which generates data samples drawn from the neighborhood distribution of the source domain, at different distances and in different directions, to serve as the auxiliary data domain in NTL. GAN Design for Source Domain Augmentation. The overall architecture of our augmentation framework is shaped like a generative adversarial network (GAN), made up of a generator G and a discriminator D. G takes in normal noise and a label in one-hot form and outputs a data sample. D, when fed with a sample, tells whether this sample is fake or not and predicts its label. The adversarial battle happens as G tries to generate data as realistic as possible to fool D, while D distinguishes whether the data fed to it is real or not to the best of its ability. After a sufficient period of such a battle, the distributions of the generated data and the ground-truth data become too similar to tell apart (Li et al., 2017). Based on this principle, we utilize G to approximate the source domain. However, if we follow standard GAN training, the trained GAN will not generate samples with deterministic labels. Therefore, we combine the intuitions of CGAN (Mirza & Osindero, 2014) and infoGAN (Chen et al., 2016) to propose a new training approach for our augmentation framework. In our approach, G uses an MSE loss to compare its generated data with the real data. D consists of three modules: a feature extractor and, behind the extractor, two classifiers as two branches, where a binary classifier predicts whether the data is real or not and a multi-class classifier outputs the label. Note that these two classifiers both rely on the representations extracted by the feature extractor. For training D, we use an MSE loss to evaluate its ability to distinguish real samples from fake ones, and a KL divergence loss to quantify its performance in predicting labels for the real data. Finally, there is an additional training step that enforces the GAN to generate samples of given labels by optimizing G and D simultaneously. The training losses $\mathcal{L}_D$, $\mathcal{L}_G$ and $\mathcal{L}_{G,D}$ are:
$\mathcal{L}_D = \mathbb{E}_{x\sim P^S_X,\,y\sim P^S_Y}\big[\tfrac{1}{2}\big(\|D_b(x), 1\|_2 + \|D_b(G(noise,\mathbf{y})), 0\|_2\big) + D_{KL}(P(D_m(x))\,\|\,P(\mathbf{y}))\big]$
$\mathcal{L}_G = \mathbb{E}_{y\sim P^S_Y}\big[\|D_b(G(noise,\mathbf{y})), 1\|_2\big]$,  $\mathcal{L}_{G,D} = \mathbb{E}_{y'\sim P^U_Y}\big[D_{KL}(P(D_m(G(noise,\mathbf{y}')))\,\|\,P(\mathbf{y}'))\big]$ (5)
Algorithm 1: Generative Adversarial Data Augmentation for Source-Only NTL
Require: Source domain data S = {(x, y) ∥ x ∼ P^S_X, y ∼ P^S_Y}; generator G, discriminator D; list of augmentation distances DIS, the maximum augmentation direction DIR; GAN training epochs e_GAN, augmentation training epochs e_AUG; initialize the auxiliary domain data A = [ ].
Output: The auxiliary domain data A = {(x, y) ∥ x ∼ P^A_X, y ∼ P^A_Y}.
1: for i = 1 to e_GAN do
2:   use (noise, y ∼ P^S_Y) to optimize G with L_G; use S and G(noise, y ∼ P^S_Y) to optimize D with L_D;
3:   use (noise, y ∼ P^U_Y) to optimize G and D with L_{G,D};
4: for dis in DIS do
5:   for dir = 1 to DIR do
6:     for l in G do
7:       interval = d(l) / DIR;  // the function d(·) returns the dimension of layer l
8:       freeze D and l[0 : dir × interval];  // freeze D and the first dir parts of the l-th layer of G
9:     for i = 1 to e_AUG do
10:      use (noise, y ∼ P^S_Y) and S to optimize G with L_aug;
11:     A ← A ∪ G(noise, y ∼ P^U_Y);  // use G to generate augmentation data
Here, we use the subscripts b and m to denote outputs from the binary classifier and the multi-class classifier of D, respectively, and $\mathbf{y}$ and $\mathbf{y}'$ are the one-hot vectors of y and y′. The noise in Eq. (5) is drawn from the Gaussian distribution $P_g=\mathcal{N}(0,1)$, while y′ is drawn from the uniform distribution $P^U_Y$ over the K equally likely classes.
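A minimal sketch of one training round behind Eq. (5) is given below. It assumes the discriminator exposes a binary head D.d_b and a multi-class head D.d_m on a shared feature extractor, as described above; optimizer handling, output shapes, and the latent dimension of 256 (Appendix B.2) are assumptions, and cross-entropy is used as a stand-in for the KL label terms, so this is an illustration rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def augmentation_gan_step(G, D, x_real, y_real, num_classes, opt_g, opt_d, latent_dim=256):
    """One round of the three updates behind Eq. (5): L_D, L_G and the joint L_{G,D}."""
    device = x_real.device
    n = x_real.size(0)
    onehot = lambda y: F.one_hot(y, num_classes).float()
    real_t = torch.ones(n, 1, device=device)   # assumed (n, 1) output of D.d_b
    fake_t = torch.zeros(n, 1, device=device)

    # L_D: MSE real/fake discrimination plus label prediction on real data.
    fake = G(torch.randn(n, latent_dim, device=device), onehot(y_real)).detach()
    d_loss = 0.5 * (F.mse_loss(D.d_b(x_real), real_t) + F.mse_loss(D.d_b(fake), fake_t)) \
             + F.cross_entropy(D.d_m(x_real), y_real)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # L_G: the generator tries to fool the binary head.
    g_loss = F.mse_loss(D.d_b(G(torch.randn(n, latent_dim, device=device), onehot(y_real))), real_t)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # L_{G,D}: joint update enforcing label consistency on uniformly drawn labels y'.
    y_u = torch.randint(0, num_classes, (n,), device=device)
    gd_loss = F.cross_entropy(D.d_m(G(torch.randn(n, latent_dim, device=device), onehot(y_u))), y_u)
    opt_g.zero_grad(); opt_d.zero_grad(); gd_loss.backward(); opt_g.step(); opt_d.step()
    return d_loss.item(), g_loss.item(), gd_loss.item()
```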
Augmentation with Different Distances. To generate data at different distances from the source domain, we apply a Gaussian estimator to measure the MMD between the distributions of the source data and the data generated by G. However, if the MMD distance is increased without restriction, the outcome will lose the semantic information, i.e., the essential features for the main task. In order to preserve this semantic information, we use a cross-entropy loss to add a restriction to the optimization objective. With this restriction, we set multiple upper bounds – DIS – for generating data at different distances (we use DIS to denote a list of dis values of various magnitudes). The specific objective is as follows:
$\mathcal{L}_{aug} = -\min\big\{dis,\ \mathrm{MMD}(P_{x\sim P^S_X}(D_z(x)),\,P_{y\sim P^S_Y}(D_z(G(noise,\mathbf{y})));\exp)\big\} + \mathbb{E}_{y\sim P^S_Y}\big[D_{CE}(D_m(G(noise,\mathbf{y})), y)\big]$ (6)
Here, the subscript z denotes outputs from the feature extractor. For every dis, we freeze D and use Eq. (6) to optimize G. After the optimization, we can generate augmentation data by feeding G with normal noise and labels drawn from $P^U_Y$. Augmentation with Different Directions. We also investigate how to generate data in different directions. The optimization of the Gaussian MMD follows the direction of the gradient, which is the fastest way to approach the objective. In this case, all augmented domains at different distances might follow the same direction, i.e., the direction of the gradient. Therefore, in order to augment neighborhood domains in different directions, we need to introduce more restrictions into the optimization process. Specifically, for the intermediate representations of G, we view each filter (neuron) as corresponding to one feature dimension of the representation. At the beginning of directional augmentation, we make multiple copies of the GAN trained in the last step (G and D), and we pick one GAN for each direction. If we want to augment the source in DIR directions, we divide the overall network of G into DIR equal parts. For the augmentation of the first direction, the first part of G is frozen and not updated during optimization. The second direction is augmented by freezing the first two parts of the network and conducting the optimization. The third corresponds to the first three parts, and so on. Given a certain dis, we can thus optimize Eq. (6) to augment the source domain in DIR directions by gradually freezing G. The detailed flow is shown in Algorithm 1. 3.3 APPLICATION OF NTL FOR MODEL INTELLECTUAL PROPERTY PROTECTION Ownership Verification. The proposed NTL can easily verify the ownership of a learning model by triggering misclassification. To achieve this, we attach a certain trigger patch, shaped like a shallow mask for images, to the source domain data and use the patched data as the auxiliary domain, and then conduct NTL on these two domains. After that, the trained model will perform poorly on data with the patch but perform well on data without the patch. This difference in model behavior can be utilized to verify ownership, similar to what regular backdoor-based model watermarking approaches do. In contrast, models trained with other methods often present nearly the same performance on data with or without the patch, as the patch is shallow and light. Applicability Authorization.
When applying NTL to authorize the model applicability, we aim to restrict the model generalization ability to only the authorized domain, where all the data is attached with a dedicated patch, and we need to make sure that the patch design will not impact the semantic information (note that the unforgeability and uniqueness of the patch are not the main consideration of this work, and we will explore it in the future). For simplicity, we use a shallow mask that is similar to that of the aforementioned ownership verification as the authorized patch. We first use our generative adversarial augmentation framework to generate neighborhood data of the original source domain. Then, we regard the source data with the patch attached as the source domain in NTL, and use the union of the original source data, the generated neighborhood data with and without the patch as the auxiliary domain. After the NTL training on these two domains, the learning model performs well only on the source domain data with the authorized patch attached, and exhibits low performance in other domains. In this way, we achieve the model applicability authorization. 4 EXPERIMENTAL RESULTS Our code is implemented in PyTorch (and provided in the Supplementary Materials). All experiments are conducted on a server running Ubuntu 18.04 LTS, equipped with NVIDIA TITAN RTX GPU. The datasets and experiment settings used are introduced below. Digits. MNIST (Deng, 2012) (MT) is the most popular digit dataset. USPS (Hull, 1994) (US) consists of digits that are scanned from the envelopes by the U.S. Postal Service. SVHN (Netzer et al., 2011) (SN) contains house number data selected from Google Street View images. MNISTM (Ganin et al., 2016) (MM) is made by combining MNIST with different backgrounds. Finally, SYN-D (Roy et al., 2018) (SD) is a synthetic dataset, which is generated by combining noisy and complex backgrounds. CIFAR10 & STL10: Both CIFAR10 and STL10 (Coates et al., 2011) are ten-class classification datasets. In order to make these two sets applicable to our problem, we follow the procedure in French et al. (2017). VisDA: This dataset (Peng et al., 2017) contains a training set (VisDA-T) and a validation set (VisDA-V) of 12 object categories. For classifying these datasets, we apply VGG-11 (Simonyan & Zisserman, 2014) for Digits Recognition, VGG-13 (Simonyan & Zisserman, 2014) for CIFAR10 & STL10, and ResNet-50 (He et al., 2016) and VGG-19 for VisDA. All networks are initialized as the pre-trained version of ImageNet (Deng et al., 2009). We use 3 seeds (2021, 2022, 2023) to conduct all experiments three times and present the average performance. Network architectures, parameters and error bars can be found in the Appendix. 4.1 TARGET-SPECIFIED NTL Effectiveness of NTL in Reducing Target Domain Performance. For digits sets and CIFAR10 & STL10, we pick all possible domain pairs to carry out experiments. As for VisDA, we regard the training set as the source domain and the validation set as the target. We include the results of standard supervised learning with KL divergence in the source domain and report the performance difference provided by using NTL. Table 1 and Figure 3 show results of digits sets, CIFAR10 & STL10, and VisDA. For NTL, we observe that the target performance of all pairs is degraded to nearly 10% with little accuracy reduction in the source domain. The largest performance degradation of the target domain, from 97.0% to 11.7%, occurs when the source is MM and the target is MT. 
Comparing NTL with supervised learning, the average relative performance degradation for the tar- get domain of all cases is approximately 80%. These results demonstrate that Target-Specified NTL can effectively degrade the performance in the target without sacrificing the source performance. NTL for Ownership Verification. We use a simple pixel-level mask as the trigger patch, which is shown in Figure 2 (please refer to the Appendix for more details). We use 6 state-of-art model watermark removal approaches to test the robustness of NTL-based verification: FTAL (Adi et al., 2018), RTAL (Adi et al., 2018), EWC (Chen et al., 2019), AU (Chen et al., 2019), watermark overwriting and model pruning (Rouhani et al., 2018). The settings of these methods are included in the Appendix. The results are shown in Table 2. We can see that models trained with NTL behave differently on the data with and without the patch, whereas supervised learning performs nearly the same. Furthermore, all these 6 watermark removal methods fail to improve the performance on the patched data, which indicates that NTL-based ownership verification is effective and robust to state-of-the-art watermark removal methods. 4.2 SOURCE-ONLY NTL Effectiveness of NTL in Reducing Non-source Domain Performance. For all three dataset cases, we select one domain as the source and then conduct our generative adversarial augmentation to generate the auxiliary domain. We set a series of discrete dis-s from 0.1 to 0.5 with a step of 0.1, and for each dis, we generate augmentation data of 4 directions (DIR=4). Table 3 and Figure 3 present the results of Source-Only NTL and its comparison with the supervised learning. Figure 4 is the augmentation data for MNIST (other datasets are included in the Appendix). From the results, we can clearly see that models trained with NTL perform worse on all non-source domains compared with the supervised learning, and MM-MT has the largest degradation from 97.0% to 14.7%. NTL for Applicability Authorization. Follow the implementation steps outlined in Section 3.3, we carry out experiments on all 3 dataset cases. The experiment results of digits are presented in Table 4 (the results of CIFAR10 & STL10 and VisDA are in the Appendix). From the table, we can see that the model performs very well in the authorized domain while having bad performance in all other domains (with or without the authorized patch). The highest classification accuracy of unauthorized domains is barely 42.7%, which will discourage users from employing this model. This shows the effectiveness of NTL in applicability authorization. 5 CONCLUSION AND FUTURE WORK In this paper, we propose Non-Transferable Learning (NTL), a novel training approach that can restrict the generalization ability of deep learning models to a specific data domain while degrading the performance in other domains. With the help of a generative adversarial augmentation framework, NTL is effective both in the presence and absence of target domains. Extensive experiments on 5 digit recognition datasets, CIFAR10 & STL10 and VisDA demonstrate that the ownership of models trained with NTL can be easily verified, and the verification is resistant to state-of-art watermark removal approaches. Moreover, with the training of NTL, model owners can authorize the model applicability to a certain data domain without worrying about unauthorized usage in other domains. 
In future work, it would be interesting to extend NTL to other tasks beyond image classification, e.g., semantic segmentation, object detection/tracking, and natural language processing (NLP) tasks. For tasks where the input data is not images, generating augmentation data would require different methods and could be challenging. Another possible future direction is Multi-Task Learning, where we could explore whether it is possible to restrict the model generalization ability to certain tasks, for instance, we think it might be useful in some cases to restrict the language model to certain tasks. Moreover, yet another interesting direction could be to combine cryptography with NTLbased verification and authorization. ACKNOWLEDGEMENT We gratefully acknowledge the support by National Science Foundation grants 1834701, 1724341, 2038853, 2016240, Office of Naval Research grant N00014-19-1-2496, and research awards from Facebook, Google, PlatON Network, and General Motors. ETHICS STATEMENT In this paper, our studies are not related to human subjects, practices to data set releases, discrimination/bias/fairness concerns, and also do not have legal compliance or research integrity issues. Non-Transferable Learning is proposed to address the shortcomings of current learning model in intellectual property protection. However, if the model trainer themselves is malicious, they may utilize NTL for harmful purposes. For example, a malicious trainer could use NTL to implant backdoor triggers in their model and release the model to the public. In addition, recently, there are domain adaptation (DA) works on adapting the domain-shared knowledge within the source model to the target one without access to the source data (Liang et al., 2020; Ahmed et al., 2021; Kundu et al., 2020; Wang et al., 2021). However, if the source model is trained with NTL, we believe that these DA approaches will be ineffective. In other words, our NTL can be regarded as a type of attack to those source-free DA works. REPRODUCIBILITY STATEMENT The implementation code can be found in https://github.com/conditionWang/NTL. All datasets and code platform (PyTorch) we use are public. In addition, we also provide detailed experiment parameters and random seeds in the Appendix. SUMMARY OF THE APPENDIX This appendix contains additional details for the ICLR 2022 article “Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization”, including mathematical proofs, experimental details and additional results. The appendix is organized as follows: • Section A introduces the theoretical proofs for Proposition 1, Theorem 1 and Theorem 2. • Section B provides additional implementation settings, including the network architectures (Sec- tion B.1) and hyper parameters (Section B.2). • Section C provides additional experimental results, including the augmentation data of other datasets (Section C.1), the model authorization results on CIFAR10 & STL10 and VisDA (Section C.2), the experiments of VisDA on VGG-19 (Section C.3), and the error bars of main experiment results (Section C.4). Section C.5 provides the experiment result of different kernel widths. • In Section D, we discuss possible attacks that can be constructed based on our proposed method. Note that NTL used in this appendix is the abbreviation of Non-Transferable Learning. A THEORY PROOFS A.1 PROOF Proposition 1. Let n be a nuisance for input x. Let z be a representation of x, and the label is y. 
For the information flow in the representation learning, we have I(z;x)− I(z;y|n) ≥ I(z;n) (7) Proof: According to Proposition 3.1 in (Achille & Soatto, 2018), there is a Markov Chain: (y,n) → x → z. This chain describes the information flow starting from the ground truth knowledge (label y and nuisance n) of input x to extracted representation z. In this case, the information flows from (y,n) to x then to z. The Data Processing Inequality (DPI) for a Markov Chain can ensure the relation that I(z;x) ≥ I(z;y,n). And with the chain rule, we have I(z;y,n) = I(z;n) + I(z;y|n). Thus, we can obtain I(z;x) ≥ I(z;n) + I(z;y|n). ■ Theorem 1. Let ŷ be the predicted label outputted by a representation model when feeding with input x, and suppose that ŷ is a scalar random variable and x is balanced on the ground truth label y. Denote the one-hot forms of ŷ and y as ŷ and y, respectively. If the KL divergence loss DKL(P(ŷ)∥P(y)) increases, the mutual information I(z;y) will decrease. Proof. Suppose that the information flow in classifier Ω of a representation model follow a Markov chain z → ŷ → y, in this case, let’s apply Data Processing Inequality, we have I(z;y) ≤ I(ŷ;y) = Ey∼PY [DKL(P(ŷ|y)∥P(ŷ))] (8) Because the input data of this representation model is balanced across classes, we suppose that both y and ŷ are drawn from the uniform distribution PUY with K equally likely possibilities. Moreover, though the distribution of ŷ might change a little during training, we can assume it won’t become very biased since the balance of input data. For both ŷ and y, they are vectors with K dimensions. In PyTorch implementation, the computation of KL divergence loss regards there is a scalar random variable corresponding to each vector and every dimension within the vector as an observation of this variable. In this case, the loss between ŷ and y forms like DKL(P(ŷ)∥P(y)) = ∑K i=1 ŷi · log ŷi yi . It is easy to obtain that DKL(P(ŷ)∥P(y)) is non-negative, and if and only if ŷ = y, the KL divergence loss hits the minimum value DKL(P(ŷ)∥P(y)) = 0. While ŷ and y are scalar random variables that equal to the dimension index with the maximum value of ŷ and y, respectively. Therefore, it’s easy to conclude that the probability of ŷ = y will decrease with the increase of DKL(P(ŷ)∥P(y)), i.e., DKL(P(ŷ)∥P(y)) ↑⇒ P(ŷ,y) ↓. For Eq. (8), we can derive it deeper I(ŷ;y) = Ey∼PY [DKL(P(ŷ|y)∥P(ŷ))] = ∑ y P(y) · ∑ ŷ P(ŷ,y) P(ŷ) log P(ŷ,y) P(ŷ) · P(y) (9) Here, both P(ŷ) and P(y) are uniform distributions PUY , and we have assumed P(ŷ) won’t become very biased at the beginning of this proof. As a result, we regard P(ŷ) and P(y) nearly unchanged after training. In addition, P(ŷ,y) decreases with the increase of DKL(P(ŷ)∥P(y)). Furthermore, we can easily calculate that ∂I(ŷ;y)∂P(ŷ,y) < 0. In this case, I(ŷ;y) decreases with the increase of DKL(P(ŷ)∥P(y)), and the same as I(z;y) since I(ŷ;y) is the upper bound. ■ Theorem 2. Let n be a nuisance that is regarded as a domain index. n=0 and n=1 denote that a certain input x comes from two different domains. Suppose that these two domains have the same number of samples d, and the samples of each domain are symmetrically distributed around the centroid. Let z be a representation of x, and it is drawn from distribution PZ . 
An estimator with the characteristic kernel from Reproducing Kernel Hilbert Spaces (RKHSs) – Gaussian Kernel estimator MMD(P,Q; exp) is applied on finite samples from distributions PZ|0 and PZ|1 to approximate the Maximum Mean Discrepancy (MMD) between these two distributions. If MMD(PZ|0,PZ|1; exp) increases to saturation, the mutual information between z and n will increase. MMD(PZ|0,PZ|1; exp)=Ez,z′∼PZ|0 [e −∥z−z′∥2 ]−2Ez∼PZ|0,z′∼PZ|1 [e −∥z−z′∥2 ]+Ez,z′∼PZ|1 [e −∥z−z′∥2 ] (10) Proof. According to the definition of Shannon Mutual Information, we have I(z;n) = En∼P(n)DKL(P(z|n)∥P(z)) = En∼P(n)Ez∼P(z|n) log P(z|n) P(z) (11) And because two domains have the same number of samples, we can have n conforms P(n) ∼ {P(0)=0.5, P(1)=0.5}, Eq. (11) can re-written as I(z;n) = 0.5Ez∼P(z|n=0) log P(z|n = 0) P(z) + 0.5Ez∼P(z|n=1) log P(z|n = 1) P(z) (12) Next, we denote the probability density function (PDF) of PZ|0 and PZ|1 as p(z) and q(z), respectively. Moreover, according to the law of total probability, the PDF of distribution PZ is PDF(PZ) = 0.5p(z) + 0.5q(z), then we have I(z;n) = 0.5 ∫ +∞ −∞ p(z) · log 2p(z) p(z) + q(z) dz + 0.5 ∫ +∞ −∞ q(z) · log 2q(z) p(z) + q(z) dz (13) Subsequently, we denote expectations and variances of PZ|0 and PZ|1 as (µ0, σ0) and (µ1, σ1), respectively. With the assumption that samples of each domain are symmetrically distributed about the centroid, we have pm = max{p(z)} = p(µ0) and qm = max{q(z)} = q(µ1). Thus if we use two variables f and g to denote PDF-s of PZ|0 and PZ|1, i.e., f = p(z) and g = q(z), in this case, we have f ∈ (0, pm] and g ∈ (0, qm]. Based on the above analysis and notations, we can split Eq. (13) into 4 terms as follows, I(z;n) = 0.5 ∫ pm 0 f · log 2f f + g df + 0.5 ∫ pm 0 f · log 2f f + g′ df + 0.5 ∫ qm 0 g · log 2g f + g dg + 0.5 ∫ qm 0 g · log 2g f ′ + g dg (14) here the superscript ′ indicates the right side of f and g. Next, let us consider the Gaussian Kernel estimator MMD(PZ|0,PZ|1; exp). The estimator consists of 3 terms, and we can easily conclude that e−∥z−z ′∥2 decreases with the increase of ∥z − z′∥2. Note that ‘MMD(PZ|0,PZ|1; exp) increases to saturation’ means at least one term increases while the other two terms remain unchanged or increased. For next proof, we need to mention Theorem 2 in (Sriperumbudur et al., 2009). Theorem 2 (Sriperumbudur et al., 2009). Suppose {(Xi, Yi)}Ni=1, Xi ∈ M,Yi ∈ {−1,+1}, ∀i is a training sample drawn i.i.d. from µ. Assuming the training sample is separable, let fsvm be the solution to the program, inf{∥f∥H : Yif(Xi) ≥ 1,∀i}, where H is an RKHS with measurable and bounded kernel k. If k is characteristic, then 1 ∥fsvm∥H ≤ γk(P,Q) 2 (15) where P := 1d ∑ Yi=+1 δXi , Q := 1d ∑ Yi=−1 δXi , d is the sample quantity and δ represents the Dirac measure. This theorem provides a bound on the margin of hard-margin SVM in terms of MMD. Eq. (15) shows that a smaller MMD between P and Q enforces a smaller margin (i.e., a less smooth classifier, fsvm, where smoothness is measured as ∥fsvm∥H). Besides, the Gaussian kernel is a measurable and bounded kernel function in an RKHS. According to this theorem and the nature of hard-margin SVM, we can easily obtained that variances σ0, σ1 of PZ|0 and PZ|1 are decreasing and the difference between expectations µ0 and µ1 is increasing with the saturated increase of MMD, and this conclusion can be found in (Jegelka et al., 2009). In the following, we will prove that I(z;n) will increase when the difference between µ0 and µ1 increases. 
Due to the symmetry of PDFs, both f and g increase at the left side of their own expectation and decrease at the right side. Without the loss of generality, we assume that µ0 locates at the left side of µ1. For the first term of Eq. (14) which corresponds to the left interval of f , the value of g is smaller than that of the case before increasing the difference between µ0 and µ1. Thus this term will increase when the difference between µ0 and µ1 increase. As for the right interval of f , the maximum value of f + g′ (in the neighborhood of µ1) comes later than that of case before increasing the difference, besides, the maximum value is also smaller. There, for the second term of Eq. (14), it will also increase with the increase of difference between µ0 and µ1. Similarly, the integration of g follows the same trend with that of f . In this case, I(z;n) will increase when the difference between µ0 and µ1 increases. Next, we will prove that I(z;n) will increase if the variance of either PZ|0 or PZ|1 decreases. Without the loss of generality, we assume the variance of PZ|0 decreases while the variance of PZ|1 remain unchanged. For the PDF of a distribution, if the variance decreases, the maximum value of PDF will increase, and there will be two points of intersection with the same value between the PDF-s. Such conclusions are easily proved since the integration of a PDF is always 1. We denote the new maximum value of f as p′m, and the value of two points of intersection as p=. In addition, during the saturated increase of MMD, we can always find a pair of µ0 and µ1 that enables the left side of g to intersect with the right side of f . With the notations in Figure 5. We can change Eq. (14) into Terms of f = ∫ p= 0 f · log 2f f + g df + ∫ p′m p= f · log 2f f + g df + ∫ p′m pm f · log 2f f + g′ df + ∫ pm p= f · log 2f f + g′ df + ∫ p= pµ1 f · log 2f f + g′ df + ∫ pµ1 0 f · log 2f f + g′ df (16) Terms of g = ∫ q1= 0 g · log 2g f + g dg + ∫ q2= q1= g · log 2g f + g dg + ∫ qm q2= g · log 2g f + g dg + ∫ qm 0 g · log 2g f ′ + g dg (17) According to the decrease of σ0, we can conclude: the 1st, 4th, 6th terms of Eq. (16) and the 2nd term of Eq. (17) decrease, while the rest terms of Eq. (16) and Eq. (17) increase. For next proof, we denote a new function, R(f, g) = f · log 2ff+g , and we can get its first-order derivative is ∂R∂f = log 2f f+g + g f(f+g) . We can easily obtain that ∂R ∂f > 0 when f > g. According to these analysis, we can get that the decrease of the 1st term of Eq. (16) can be offset by the added increase of the 5th term of Eq. (16) and the 3rd term of Eq. (17); the decrease of the 4th term of Eq. (16) can be offset by the 2nd term of Eq. (16); the decrease of the 6th term of Eq. (16) can be offset by the 4th term of Eq. (17); the decrease of the 2nd term of Eq. (17) can be offset by the 3rd term of Eq. (16). Moreover, such offsets are overcompensation. In this case, we can prove that I(z;n) will increase if the variance of either PZ|0 or PZ|1 decreases. Considering the combination of the above two cases, if the difference between expectations increases and variances of two distributions decrease, the mutual information will increase. ■ A.2 OBSERVE THE MUTUAL INFORMATION We follow the similar process of (Achille & Soatto, 2018) to observe the change of mutual information I(z;n). To be specific, let Θ be a binary classifier that given representation z and nuisance n tries to predict whether z is sampled from the distribution of one domain PZ|0 or another domain PZ|1. 
According to (Sønderby et al., 2016), if we train Θ with the loss of Ez∼PZ|0(logΘ(z)) + Ez∼PZ|1(log 1−Θ(z)), there is always a Bayes-optimal Θ∗, Θ∗ = P(z|n = 0) P(z|n = 0) + P(z|n = 1) (18) With Eq.(12), if we assume the Θ0,Θ1 trained with Ez∼PZ|0(logΘ0(z))+Ez∼PZ|1(log 1−Θ0(z)) and Ez∼PZ|1(logΘ1(z)) + Ez∼PZ|0(log 1−Θ1(z)), respectively, are close to the optimal ones Θ∗0,Θ ∗ 1, we have I(z;n) = 0.5Ez∼P(z|n=0) log P(z|n = 0) P(z) + 0.5Ez∼P(z|n=1) log P(z|n = 1) P(z) = 0.5Ez∼P(z|n=0) log 2Θ0(z) + 0.5Ez∼P(z|n=1) log 2Θ1(z) (19) With this approximation, we train Θ0 and Θ1 for the model of every NTL training round, and we get the curve of I(z;n) shown in Figure 6 (MNIST). According to the figure, I(z;n) is increasing during the overall training process, which is consistent with our intention. B IMPLEMENTATION SETTINGS B.1 NETWORK ARCHITECTURE To build the classification models, we use several popular architectures as the bottom feature extractor and attach them with fully-connected layers as the top classifier, which are shown in Table 5. Specifically, the backbone network of digits is VGG-11, that of CIFAR10 & STL10 is VGG-13, and we use both ResNet-50 and VGG-19 for VisDA. The classifiers of all models are the same, i.e., 3 linear layers with ReLU and dropout. As for the GAN in the augmentation framework, the generator G is made up of 4 ConvTranspose blocks and 2 Residual blocks, and the discriminator D consists of a feature extractor with 4 convolution layers, a binary classifier and a multi-class classifier. These two classifiers are composed of sequential fully-connected layers and share the same representations extracted from the front extractor. The detailed architecture is shown in Table 6 and 7. B.2 HYPER PARAMETERS Scaling factors and upper bounds. As introduced in Section 3.1 of the main paper, there are two scaling factors (α, α′) that control the trade-off between the maximization of I(z;n) and the sufficiency property of the source domain. Here, we conduct experiments using different values (α = 0.01, 0.05, 0.10, 0.20, 0.50 and α′ = 0.01, 0.05, 0.10, 0.20, 0.50), and evaluate their impact to the performance of NTL. For Target-Specified NTL, we select the combination of MNIST→USPS, STL10→CIFAR10 and VisDA-T→VisDA-V. For Source-Only NTL, we choose MNIST→Non-S, STL10→Non-S and VisDA-T→Non-S as the representatives to carry out experiments. The results are presented in Tables 8 and 9. It is easy to conclude that NTL can work effectively with different scaling factors. As for the upper bounds (β, β′), we set them for the sake of preventing the auxiliary domain loss and the MMD distance from dominating the optimization objective, affecting the convergence of training. Training parameters. For the optimization of NTL, we utilize Adam as the optimizer, with learning γ = 0.0001 and batch size of 32. For all datasets, we randomly select 8,000 samples from their own training sets as the source data, and 1,000 samples from their own testing sets as the test data (if a dataset does not have test set, we select its test data from the training set without overlapping with the chosen 8,000 source samples). And the sample quantities of the source and auxiliary domain are always the same. In the training of adversarial augmentation, the optimizer is also Adam, and we set the learning rate to γ = 0.0002 with two decay momentums 0.5 and 0.999. The batch size is 64, and the dimension of the latent space fed to the generator is 256. 
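To illustrate the estimate in Eq. (19), the following sketch trains two small binary discriminators Θ0 and Θ1 on representations from the two domains and averages log 2Θ(z); the MLP architecture, optimizer, epoch count, and function names are our own assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn

def fit_domain_discriminator(z_pos, z_neg, epochs=200, lr=1e-3):
    # Fit Theta(z) ~ P(z belongs to the "positive" domain); a small MLP is assumed.
    theta = nn.Sequential(nn.Linear(z_pos.size(1), 128), nn.ReLU(),
                          nn.Linear(128, 1), nn.Sigmoid())
    opt = torch.optim.Adam(theta.parameters(), lr=lr)
    for _ in range(epochs):
        loss = -(torch.log(theta(z_pos) + 1e-8).mean()
                 + torch.log(1.0 - theta(z_neg) + 1e-8).mean())
        opt.zero_grad(); loss.backward(); opt.step()
    return theta

def estimate_mutual_information(z0, z1):
    # Eq. (19): I(z;n) ~= 0.5 E_{z~P_{Z|0}}[log 2*Theta_0(z)] + 0.5 E_{z~P_{Z|1}}[log 2*Theta_1(z)]
    theta0 = fit_domain_discriminator(z0.detach(), z1.detach())
    theta1 = fit_domain_discriminator(z1.detach(), z0.detach())
    with torch.no_grad():
        return (0.5 * torch.log(2.0 * theta0(z0) + 1e-8).mean()
                + 0.5 * torch.log(2.0 * theta1(z1) + 1e-8).mean()).item()
```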
B.3 TRIGGERING AND AUTHORIZATION PATCH As mentioned in Section 4.1 and 4.2 of our main paper, we attach a patch on the data to utilize NTL for ownership verification and usage authorization. We create the patch in a simple way. Specifically, for the pixel of i-th row and j-th column in an RGB image, if either i or j is even, then a value of v is added to the R channel of this pixel (the channel value cannot exceed 255). Intuitively, the patch is dependent on pixel values of each image. Thus the changes of feature space brought by the attachment of these patches for various images are not the same. In our experiments, if the content of the image is simple, e.g., MNIST, USPS and SVHN, the v with a small value can shift the feature space sufficiently, but for more complicated images, we have to increase v to enable source images attached with and without the patch differentiable. Specifically, we pick the value as follows: MNIST, USPS, SVHN (v = 20); MNIST-M, SYN-D, CIFAR10, STL10 (v = 80); VisDA (v = 100). As mentioned in the main paper, we will explore the unforgeability and uniqueness of patch generation in the further work. B.4 IMPLEMENTATION OF WATERMARK REMOVAL APPROACHES In the Section 4.1 of the main paper, we implement 6 model watermark removal approaches to verify the effectiveness of NTL-based ownership verification. Here, we introduce how to implement these approaches. FTAL (Adi et al., 2018) is an approach that fine-tunes the entire watermarked model using the original training data. To implement it, we use 30% of training set that has been learned by NTL to fine-tune the entire model. When using RTAL (Adi et al., 2018), the top classifier is randomly initialized before fine-tuning. In our experiments, we load the feature extractor of the model trained with NTL and randomly initialize a classifier to attach on the extractor, and then use 30% of the training set to fine-tune this combined model. As for EWC (Chen et al., 2019), we use the code of (Chen et al., 2019) to compute the fisher information of network parameters and adjust the learning rate of fine-tuning. The data used by EWC is also 30% of the training set. Finally, AU (Chen et al., 2019) utilizes the watermarked model to pseudo label additional unlabeled samples from other similar domains, and these samples will be used to fine-tune the model together with the original training set. Following this principle, we use 30% of the training set and the same quantity of unlabeled samples from other domains (the proportion ratio between these two parties is 1:1) to fine-tune the model trained with our NTL. We conduct all fine-tuning methods for 200 epochs. For the watermark overwriting, we overwrites a new backdoor-based watermark (Zhang et al., 2018) on the model trained with NTL. Specifically, we attach a white corner (3 × 3) as the backdoor trigger to 1/15 of the training set, and follow the training approach of (Zhang et al., 2018) to write the watermark on the model. In addition, similar to other watermarking works (Rouhani et al., 2018), we also test if NTL-based verification is resistant to model pruning, and apply a layer-wise pruning method (Han et al., 2015) to prune 70% parameters of the model trained with NTL. 
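Returning to the patch construction of Section B.3 above, a minimal sketch of attaching the pixel-level patch could look as follows; it assumes uint8 images in (C, H, W) layout and the dataset-dependent values v = 20/80/100 given above.

```python
import torch

def attach_patch(img, v=20):
    """Add v to the R channel wherever the row or column index is even (Section B.3).

    img: uint8 tensor of shape (3, H, W); v = 20/80/100 depending on the dataset.
    """
    _, h, w = img.shape
    rows = torch.arange(h).unsqueeze(1)                  # (H, 1)
    cols = torch.arange(w).unsqueeze(0)                  # (1, W)
    mask = (rows % 2 == 0) | (cols % 2 == 0)             # even row OR even column
    out = img.clone().int()
    out[0][mask] += v                                    # R channel only
    return out.clamp(max=255).to(torch.uint8)            # channel value cannot exceed 255
```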
C ADDITIONAL EXPERIMENTAL RESULTS C.1 AUGMENTATION DATA OF OTHER DATASETS In the main paper, we present the augmentation data of MNIST; in this section, we include the augmentation data of the other datasets: Figure 7 for USPS, Figure 8 for SVHN, Figure 9 for MNIST-M, Figure 10 for SYN-D, Figure 11 for CIFAR10, and Figures 12 and 13 for VisDA-T. C.2 MODEL USAGE AUTHORIZATION ON CIFAR10 & STL10 AND VISDA Here we present the experiments on authorizing model usage on CIFAR10 & STL10 and VisDA, shown in Table 10. According to the results, the model performs well on data attached with the authorized patch and performs poorly on all other samples. C.3 ADDITIONAL RESULTS OF VISDA ON VGG-19 To demonstrate the effectiveness of NTL on different network architectures, we also carry out the VisDA experiments on VGG-19. All other settings are the same as before, and the results are shown in Table 11. The performance is consistent with the other experiments reported above, which shows the wide applicability of NTL. C.4 ERROR BAR We conduct all experiments with three random seeds (2021, 2022, 2023) and present the error ranges in this section. Table 12 gives the error range of Target-Specified NTL corresponding to Table 1 of the main paper; Table 13 presents the errors of the Source-Only NTL experiments corresponding to Table 3 of the main paper; Table 14 shows the errors of model authorization, presented as Table 4 in our main paper. C.5 THE IMPACT OF GAUSSIAN KERNEL BANDWIDTH In our implementation, we utilize a series of Gaussian kernels to approximate MMD, implemented as the MK-MMD of Long et al. (2015). Specifically, the bandwidth of the kernels is controlled by two parameters, mul and num (we use mul = 2.0, num = 5 in the experiments presented in the main text of the paper). The bandwidths of these kernels are
$B = \left\{ \frac{\|x_1 - x_2\|^2}{n^2 - n} \cdot \mathrm{mul}^{\,i - \lfloor num/2 \rfloor} \right\}_{i=0}^{num-1}$ (20)
where x1 and x2 are two input data batches of size n, and ⌊·⌋ extracts the integer part of its input. To investigate the impact of the kernel bandwidth, we select a series of mul and num values and conduct Source-Only NTL experiments on MNIST, CIFAR10 and VisDA-T; the results are shown in Table 15. We observe that the performance difference between the source and target domains is nearly the same across different mul and num values. As these two parameters directly determine the kernel bandwidth, the results indicate that the kernel bandwidth does not have a significant impact on NTL performance. D POSSIBLE ATTACKS BASED ON NTL Although we propose Non-Transferable Learning for protecting intellectual property in AIaaS, a malicious model owner could also use NTL to evasively poison their model or implant backdoor triggers into it and release the model to the public. In the setting of applying Target-Specified NTL to verify model ownership, the patch we use can also be regarded as a trigger for a certain misclassification backdoor. From the results of ownership verification in the main paper, we can see the possibility of launching NTL-based targeted backdoor attacks. As for the case of Source-Only NTL, our objective is shaped like a universal poisoning attack that restricts the generalization ability of models. The results in our main paper demonstrate the feasibility of this poisoning attack.
In addition, there have recently been more domain adaptation (DA) works that adapt the domain-shared knowledge within a source model to the target domain without access to the source data (Liang et al., 2020; Ahmed et al., 2021; Kundu et al., 2020). However, if the source model is trained with Source-Only NTL, we believe that these DA methods will be ineffective. In other words, our NTL can be regarded as a type of attack against such source-free DA methods.
1. What is the focus and contribution of the paper on non-transferable learning for protecting intellectual property? 2. What are the strengths of the proposed approach, particularly in terms of its effectiveness in ownership verification and usage authorization? 3. What are the weaknesses of the paper, especially regarding the experiment section and the comparison with other works? 4. Do you have any concerns about the training complexity and computational time of the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper Protecting the intellectual property of trained models has received increasing attention. Existing research on protecting intellectual property falls into two major categories: ownership verification and usage authorization. To this end, the authors propose to utilize non-transferable learning to achieve both the goal of ownership verification and that of usage authorization. Extensive experiments on several representative datasets validate the effectiveness of the proposed method in terms of ownership verification. Generally, this paper proposes a novel idea to address a practical problem in real-world applications, which could inspire many readers to follow it and have an important influence on the computer vision community. I support the acceptance of this paper; it would make for a better ICLR conference. Review This paper could be significantly improved by addressing the following issues: In Table 1, what is the number of training epochs when transferring MT to MM? Did you try to increase the number of fine-tuning epochs? If you train for enough epochs, the model would eventually reach the original accuracy. A sensitivity analysis regarding the number of fine-tuning epochs is necessary, compared against training from scratch and transfer learning from the original model to the target task. The training complexity of your NTL approach and of the GAN training should be introduced in this paper. Is the computing time of the MMDs during each time step at least twice your training time? The proposed methodology is well presented. However, the differences between the proposed model and related SOTA works should be presented more clearly. Comparing Table 2 and Table 3, it can be seen that the source-only method sometimes shows greater performance than the target-specified method. The reasons why this happens would be interesting to discuss, since providing the target domain should be more accurate when removing some part of the generalization space; however, the experiments do not seem to agree with this. A future research section should be added in the revision.
ICLR
Title Emergent Communication in a Multi-Modal, Multi-Step Referential Game Abstract Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration. The multi-modal multi-step setting allows agents to develop an internal communication significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation may vary according to the difficulty of the task. We examine these properties empirically using a dataset consisting of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, where gradual information exchange informs better predictions and higher communication bandwidth improves generalization. 1 INTRODUCTION Recently, there has been a surge of work on neural network-based multi-agent systems that are capable of communicating with each other in order to solve a problem. Two distinct lines of research can be discerned. In the first one, communication is used as an essential tool for sharing information among multiple active agents in a reinforcement learning scenario (Sukhbaatar et al., 2016; Foerster et al., 2016; Mordatch & Abbeel, 2017; Andreas et al., 2017). Each of the active agents is, in addition to its traditional capability of interacting with the environment, able to communicate with other agents. A population of such agents is subsequently jointly tuned to reach a common goal. The main goal of this line of work is to use communication (which may be continuous) as a means to enhance learning in a difficult, sparse-reward environment. The communication may also mimic human conversation, e.g., in settings where agents engage in natural language dialogue based on a shared visual modality (Das et al., 2017; Strub et al., 2017). In contrast, the goal of our work is to learn the communication protocol, and aligns more closely with another line of research, which focuses on investigating and analyzing the emergence of communication in (cooperative) multi-agent referential games (Lewis, 2008; Skyrms, 2010; Steels & Loetzsch, 2012), where one agent (the sender) must communicate what it sees using some discrete emergent communication protocol, while the other agent (the receiver) is tasked with figuring out what the first agent saw.
These lines of work are partially motivated by the idea that artificial communication (and other manifestations of machine intelligence) can emerge through interacting with the world and/or other agents, which could then converge towards human language (Gauthier & Mordatch, 2016; Mikolov et al., 2015; Lake et al., 2016; Kiela et al., 2016). (Lazaridou et al., 2016) have recently proposed a basic version of this game, where there is only a single transmission of a message from the sender to the receiver, as a test bed for both inducing and analyzing a communication protocol between two neural network-based agents. A related approach to using a referential game with two agents is proposed by (Andreas & Klein, 2016). (Jorge et al., 2016) have more recently introduced a game similar to the setting above, but with multiple transmissions of messages between the two agents. The sender is, however, strictly limited to sending single bit (yes/no) messages, and the number of exchanges is kept fixed. These earlier works lack two fundamental aspects of human communication in solving cooperative games. First, human information exchange is bidirectional with symmetric communication abilities, and spans exchanges of arbitrary length. In other words, linguistic interaction is not one-way, and can take as long or as short as it needs. Second, the information exchange emerges as a result of a disparity in knowledge or access to information, with the capability of bridging different modalities. For example, a human who has never seen a tiger but knows that it is a “big cat with stripes” would be able to identify one in a picture without effort. That is, humans can identify a previously unseen object from a textual description alone, while agents in previous interaction games have access to the same modality (a picture) and their shared communication protocol. Based on these considerations, we extend the basic referential game used in (Lazaridou et al., 2016; Andreas & Klein, 2016; Jorge et al., 2016) and (Havrylov & Titov, 2017) into a multi-modal, multi-step referential game. Firstly, our two agents, the sender and receiver, are grounded in different modalities: one has access only to the visual modality, while the other has access only to textual information (multi-modal). The sender sees an image and communicates it to the receiver whose job is to determine which object the sender refers to, while only having access to a set of textual descriptions. Secondly, communication is bidirectional and symmetrical, in that both the sender and receiver may send an arbitrary binary vector to each other. Furthermore, we allow the receiver to autonomously decide when to terminate a conversation, which leads to an adaptive-length conversation (multistep). The multi-modal nature of our proposal enforces symmetric, high-bandwidth communication, as it is not enough for the agents to simply exchange the carbon copies of their modalities (e.g. communicating the value of an arbitrary pixel in an image) in order to solve the problem. The multistep nature of our work allows us to train the agents to develop an efficient strategy of communication, implicitly encouraging a shorter conversation for simpler objects and a longer conversation for more complex objects. We evaluate and analyze the proposed multi-modal, multi-step referential game by creating a new dataset consisting of images of mammals and their textual descriptions. 
The task is somewhat related to recently proposed multi-modal dialogue games, such as that of (de Vries et al., 2016), but then played by agents using their own emergent communication. We build neural network-based sender and receiver, implementing techniques such as visual attention (Xu et al., 2015) and textual attention (Bahdanau et al., 2014). Each agent generates a multi-dimensional binary message at each time step, and the receiver decides whether to terminate the conversation. We train both agents jointly using policy gradient (Williams, 1992).

2 MULTI-MODAL, MULTI-STEP REFERENTIAL GAME

Game The proposed multi-modal, multi-step referential game is characterized by a tuple $G = \langle S, O, O_S, O_R, s^* \rangle$. $S$ is a set of all possible messages used for communication by both the sender and receiver. An analogy of $S$ in natural languages would be a set of all possible sentences. Unlike (Jorge et al., 2016), we let $S$ be shared between the two agents, which makes the proposed game a more realistic proxy to natural language conversations where two parties share a single vocabulary. In this paper, we define the set of symbols to be a set of d-dimensional binary vectors, reminiscent of the widely-used bag-of-words representation of a natural language sentence. That is, $S = \{0, 1\}^d$. $O$ is a set of objects. $O_S$ and $O_R$ are the sets of two separate views, or modes, of the objects in $O$, exposed to the sender and receiver, respectively. Due to the variability introduced by the choice of mode, the cardinalities of the latter two sets may differ, i.e., $|O_S| \neq |O_R|$, and it is usual for the cardinalities of both $O_S$ and $O_R$ to be greater than or equal to that of $O$, i.e., $|O_S| \geq |O|$ and $|O_R| \geq |O|$. In this paper, for instance, $O$ is a set of selected mammals, and $O_S$ and $O_R$ are, respectively, images and textual descriptions of those mammals: $|O_S| \gg |O_R| = |O|$. The ground-truth map between $O_S$ and $O_R$ is given as $s^* : O_S \times O_R \to \{0, 1\}$. This function $s^*$ is used to determine whether elements $o_s \in O_S$ and $o_r \in O_R$ belong to the same object in $O$. It returns 1 when they do, and 0 otherwise. At the end of a conversation, the receiver selects an element from $O_R$ as an answer, and $s^*$ is used as a scorer of this particular conversation based on the sender's object $o_s$ and the receiver's prediction $\hat{o}_r$.

Agents The proposed game is played between two agents, sender $A_S$ and receiver $A_R$. A sender is a stochastic function that takes as input the sender's view of an object $o_s \in O_S$ and the message $m_r \in S$ received from the receiver and outputs a binary message $m_s \in S$. That is, $A_S : O_S \times S \to S$. We constrain the sender to be memory-less in order to ensure any message created by the sender is a response to an immediate message sent by the receiver. Unlike the sender, it is necessary for the receiver to possess a memory in order to reason through a series of message exchanges with the sender and make a final prediction. The receiver also has an option to determine whether to terminate the on-going conversation. We thus define the receiver as $A_R : S \times \mathbb{R}^q \to \Xi \times O_R \times S \times \mathbb{R}^q$, where $\Xi = \{0, 1\}$ indicates whether to terminate the conversation. It receives the sender's message $m_s \in S$ and its memory $h \in \mathbb{R}^q$ from the previous step, and stochastically outputs: (1) whether to terminate the conversation $s \in \{0, 1\}$, (2) its prediction $\hat{o}_r \in O_R$ (if decided to terminate) and (3) a message $m_r \in S$ back to the sender (if decided not to terminate).

Play Given $G$, one game instance is initiated by uniformly selecting an object $o$ from the object set $O$.
A corresponding view $o_s \in O_S$ is sampled and given to the sender $A_S$. The whole set $O_R$ is provided to the receiver $A_R$. The receiver's memory and initial message are learned as separate parameters.

3 AGENTS

At each time step $t \in \{1, \ldots, T_{\max}\}$, the sender computes its message $m_s^t = A_S(o_s, m_r^{t-1})$. This message is then transmitted to the receiver. The receiver updates its memory $h_r^t$, decides whether to terminate the conversation $s^t$, makes its prediction $o_r^t$, and creates a response: $(s^t, o_r^t, m_r^t, h_r^t) = A_R(m_s^t, h_r^{t-1})$. If $s^t = 1$, the conversation terminates, and the receiver's prediction $o_r^t$ is used to score this game instance, i.e., $s^*(o_s, o_r^t)$. Otherwise, this process repeats in the next time step: $t \leftarrow t + 1$. Fig. 1 depicts a single sender-receiver exchange at time step $t$.

Feedforward Sender Let $o_s \in O_S$ be a real-valued vector, and $m_r \in S$ be a d-dimensional binary message. We build a sender $A_S$ as a feedforward neural network that outputs a d-dimensional factorized Bernoulli distribution. It first computes the hidden state $h_s$ by
$$h_s = f_s(o_s, m_r), \qquad (1)$$
and computes $p(m_{s,j} = 1)$ for all $j = 1, \ldots, d$ as $p(m_{s,j} = 1) = \sigma(w_{s,j}^\top h_s + b_{s,j})$, where $\sigma$ is a sigmoid function, and $w_{s,j} \in \mathbb{R}^{\dim(h_s)}$ and $b_{s,j} \in \mathbb{R}$ are the weight vector and bias, respectively. During training, we sample a sender's message from this distribution, while during test time we take the most likely message, i.e., $m_{s,j} = \arg\max_{b \in \{0,1\}} p(m_{s,j} = b)$.

Attention-based Sender When the view $o_s$ of an object is given as a set of vectors $\{o_{s_1}, \ldots, o_{s_n}\}$ rather than a single vector, we implement and test an attention mechanism from (Bahdanau et al., 2014; Xu et al., 2015). For each vector in the set, we first compute the attention weight against the received message $m_r$ as
$$\alpha_j = \frac{\exp(f_{s,att}(o_{s_j}, m_r))}{\sum_{j'=1}^{n} \exp(f_{s,att}(o_{s_{j'}}, m_r))},$$
and take the weighted sum of the input vectors: $\tilde{o}_s = \sum_{j=1}^{n} \alpha_j o_{s_j}$. This weighted sum is used instead of $o_s$ as an input to $f_s$ in Eq. (1). Intuitively, this process of attention corresponds to selecting a subset of the sender's view of an object according to a receiver's query.

Recurrent Receiver Let $o_r \in O_R$ be a real-valued vector, and $m_s \in S$ be a d-dimensional binary message received from the sender. A receiver $A_R$ is a recurrent neural network that first updates its memory by $h_r^t = f_r(m_s^t, h_r^{t-1}) \in \mathbb{R}^q$, where $f_r$ is a recurrent activation function. We use a gated recurrent unit (GRU, Cho et al., 2014). The initial message from the receiver to the sender, $m_r^0$, is learned as a separate parameter. Given the updated memory vector $h_r^t$, the receiver first computes whether to terminate the conversation. This is done by outputting a stop probability, as in $p(s^t = 1) = \sigma(w_{r,s}^\top h_r^t + b_{r,s})$, where $w_{r,s} \in \mathbb{R}^q$ and $b_{r,s} \in \mathbb{R}$ are the weight vector and bias, respectively. The receiver terminates the conversation ($s^t = 1$) by either sampling from (during training) or taking the most likely value (during test time) of this distribution. If $s^t = 0$, the receiver computes the message distribution similarly to the sender as a d-dimensional factorized Bernoulli distribution:
$$p(m_{r,j}^t = 1) = \sigma\Big(w_{r,j}^\top \tanh\big(W_r^\top h_r^t + U_r^\top \big(\textstyle\sum_{o_r \in O_R} p(o_r = 1)\, g_r(o_r)\big) + c_r\big) + b_{r,j}\Big),$$
where $g_r : \mathbb{R}^{\dim(o_r)} \to \mathbb{R}^q$ is a trainable function that embeds $o_r$ into a q-dimensional real-valued vector space. The second term inside the tanh function ensures that the message generated by the receiver takes into consideration the receiver's current belief $p(o_r = 1)$ (see Eq. (2)) on which object the sender is viewing.
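For concreteness, here is a minimal PyTorch sketch of the sender's factorized Bernoulli message head of Eq. (1): a hidden layer followed by d independent sigmoid outputs, sampled during training and thresholded at test time. The class name and the simple concatenation of o_s and m_r are our simplifications; the paper's actual sender input also includes their point-wise difference and product (Sec. 5.2), and its hidden layer uses 256 tanh units as assumed here.

```python
import torch
import torch.nn as nn

class FeedforwardSender(nn.Module):
    """Sketch of the feedforward sender: h_s = f_s(o_s, m_r), then d Bernoulli bits."""

    def __init__(self, obj_dim: int, msg_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.f_s = nn.Sequential(nn.Linear(obj_dim + msg_dim, hidden_dim), nn.Tanh())
        self.msg_head = nn.Linear(hidden_dim, msg_dim)   # w_{s,j}, b_{s,j} for each bit j

    def forward(self, o_s: torch.Tensor, m_r: torch.Tensor, sample: bool = True):
        h_s = self.f_s(torch.cat([o_s, m_r], dim=-1))
        p = torch.sigmoid(self.msg_head(h_s))            # p(m_{s,j} = 1) for j = 1..d
        if sample:                                       # training: sample each bit
            m_s = torch.bernoulli(p)
        else:                                            # test: most likely message
            m_s = (p > 0.5).float()
        return m_s, p
```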
If $s^t = 1$ (terminate), the receiver instead produces its prediction by computing the distribution over all the elements in $O_R$:
$$p(o_r = 1) = \frac{\exp(g_r(o_r)^\top h_r^t)}{\sum_{o_r' \in O_R} \exp(g_r(o_r')^\top h_r^t)}. \qquad (2)$$
Again, $g_r(o_r)$ is the embedding of an object $o$ based on the receiver's view $o_r$, similarly to what was proposed by (Larochelle et al., 2008). The receiver's prediction is given by $\hat{o}_r = \arg\max_{o_r \in O_R} p(o_r = 1)$, and the entire prediction distribution is used to compute the cross-entropy loss.

Attention-based Receiver Similarly to the sender, we can incorporate the attention mechanism in the receiver. This is done at the level of the embedding function $g_r$ by modifying it to take as input both the set of vectors $o_r = \{o_{r,1}, \ldots, o_{r,n}\}$ and the current memory vector $h_r^t$. Attention weights over the view vectors are computed against the memory vector, and their weighted sum $\tilde{o}_r$, or its affine transformation to $\mathbb{R}^q$, is returned.

4 TRAINING

Both the sender and receiver are jointly trained in order to maximize the score $s^*(o_s, \hat{o}_r)$. Our per-instance loss function $L^i$ is the sum of the classification loss $L_c^i$ and the reinforcement learning loss $L_r^i$. The classification loss is a usual cross-entropy loss defined as $L_c^i = \log p(o_r^* = 1)$, where $o_r^* \in O_R$ is the view of the correct object. The reinforcement learning loss is defined as
$$L_r^i = \sum_{t=1}^{T} \underbrace{\big(R - B_s(o_s, m_r^{t-1})\big) \sum_{j=1}^{d} \log p(m_{s,j}^t)}_{\text{sender}} + \underbrace{\big(R - B_r(m_r^t, h_r^{t-1})\big)\Big(\log p(s^t) + \sum_{j=1}^{d} \log p(m_{r,j}^t)\Big)}_{\text{receiver}},$$
where $R$ is a reward given by the ground-truth mapping $s^*$. This reinforcement learning loss corresponds to REINFORCE (Williams, 1992). $B_s$ and $B_r$ are baseline estimators for the sender and receiver, respectively, and both of them are trained to predict the final reward $R$, as suggested by (Mnih & Gregor, 2014):
$$L_B^i = \sum_{t=1}^{T} \big(R - B_s(o_s, m_r^{t-1})\big)^2 + \big(R - B_r(m_s^t, h_r^{t-1})\big)^2.$$
In order to facilitate the exploration by the sender and receiver during training, we regularize the negative entropies of the sender's and receiver's message distributions. We also minimize the negative entropy of the receiver's termination distribution to encourage the conversation to be of length $1 - \left(\tfrac{1}{2}\right)^{T_{\max}}$ on average. The final per-instance loss can then be written as
$$L^i = L_c^i + L_r^i - \sum_{t=1}^{T}\Big(\lambda_s H(s^t) + \lambda_m \sum_{j=1}^{d} \big(H(m_{s,j}^t) + H(m_{r,j}^t)\big)\Big),$$
where $H$ is the entropy, and $\lambda_s \geq 0$ and $\lambda_m \geq 0$ are regularization coefficients. We minimize this loss by computing its gradient with respect to the parameters of both the sender and receiver and taking a step toward the opposite direction. We list all the mathematical symbols used in the description of the game in Appendix A.

5 EXPERIMENTAL SETTINGS

5.1 DATA COLLECTION AND PREPROCESSING

We collect a new dataset consisting of images and textual descriptions of mammals. We crawl the nodes in the subtree of the "mammal" synset in WordNet (Miller, 1995). For each node, we collect the word $o$ and the corresponding textual description $o_r$ in order to construct the object set $O$ and the receiver's view set $O_R$. For each word $o$, we query Flickr to retrieve as many as 650 images.¹ These images form the sender's view set $O_S$. We sample 70 mammals from the subtree and build three sets from the collected data. First, we keep a subset of sixty mammals for training (550 images per mammal) and set aside data for validation (50 images per mammal) and test (20 images per mammal). This constitutes the in-domain test, that measures how well the model does on mammals that it is familiar with.
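A small sketch of the in-domain split just described (550 train / 50 validation / 20 test images for each of the sixty training mammals). The function name, dictionary layout, and seeded shuffling are our own illustration, not the released data-building script, and it assumes at least 620 collected images per mammal.

```python
import random

def split_in_domain(images_by_mammal: dict, seed: int = 0):
    """Split each training mammal's image list into 550 / 50 / 20 images."""
    rng = random.Random(seed)
    train, val, test = {}, {}, {}
    for mammal, imgs in images_by_mammal.items():
        imgs = imgs[:]                 # copy so the input list is left untouched
        rng.shuffle(imgs)
        train[mammal] = imgs[:550]
        val[mammal] = imgs[550:600]
        test[mammal] = imgs[600:620]
    return train, val, test
```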
We use the remaining ten mammals to build an out-of-domain test set (100 images per mammal), which allows us to test the generalization ability of the sender and receiver to unseen objects, and thereby to determine whether the receiver indeed relies on the availability of a different mode from the sender. In addition to the mammals, we build a third test set consisting of 10 different types of insects, rather than mammals. To construct this transfer test, we uniformly select 100 images per insect at random from the ImageNet dataset (Deng et al., 2009), while the descriptions are collected from WordNet, similarly to the mammals. The test is meant to measure an extreme case of zero-shot generalization, to an entirely different category of objects (i.e., insects rather than mammals, and images from ImageNet rather than from Flickr). Image Processing Instead of a raw image, we use features extracted by ResNet-34 (He et al., 2016). With the attention-based sender, we use 64 (8× 8) 512-dimensional feature vectors from the final convolutional layer. Otherwise, we use the 512-dimensional feature vector after average pooling those 64 vectors. We do not fine-tune the network. 1We query Flickr, obtaining more than 650 images per word, then we remove duplicates and use a heuristic to discard undesirables images. Duplicates are detected using dHash (Tantos, 2017). As a heuristic, we take an image classifier that was trained on ImageNet (Krizhevsky et al., 2012), classify each candidate image, and discard an image if its most likely class is not an animal. We randomly select from the remaining images to acquire the desired amount. Text Processing Each description is lowercased. Stopwords are filtered using the Stopwords Corpus included in NLTK (Bird et al., 2009). We treat each description as a bag of unique words by removing any duplicates. The average description length is 9.1 words with a standard deviation of 3.16. Because our dataset is relatively small, especially in the textual mode, we use pretrained 100-dimensional GloVe word embeddings (Pennington et al., 2014). With the attention-based receiver, we consider a set of such GloVe vectors as or, and otherwise, the average of those vectors is used as the representation of a description. 5.2 MODELS AND TRAINING Feedforward Sender When attention is not used, the sender is configured to have a single hidden layer with 256 tanh units. The input os is constructed by concatenating the image vector, the receiver’s message vector, their point-wise difference and point-wise product, after embedding the image and message vectors into the same space by a linear transformation. The attention-based sender uses a single-layer feedforward network with 256 tanh units to compute the attention weights. Recurrent Receiver The receiver is a single hidden-layer recurrent neural network with 64 gated recurrent units. When the receiver is configured to use attention over the words in each description, we use a feedforward network with a single hidden layer of 64 rectified linear units. Baseline Networks The baseline networks Bs and Br are both feedforward networks with a single hidden layer of 500 rectified linear units each. The receiver’s baseline network takes as input the recurrent hidden state ht−1r but does not backpropagate the error gradient through the receiver. 
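Tying together the loss of Sec. 4 with the baseline networks just described, the following is a simplified per-instance sketch of the REINFORCE and baseline-regression terms. The argument names, shapes, and sign handling are our reading of the equations above, not the authors' implementation.

```python
import torch

def reinforce_terms(reward, log_p_sender_msg, log_p_stop, log_p_receiver_msg,
                    baseline_s, baseline_r):
    """Per-instance REINFORCE loss L_r^i and baseline loss L_B^i (Sec. 4).

    All arguments except `reward` are lists over conversation steps t;
    log_p_sender_msg[t] and log_p_receiver_msg[t] already sum the log-probabilities
    of the d Bernoulli message bits, and baseline_s / baseline_r hold B_s and B_r outputs.
    """
    loss_r = torch.zeros(())
    loss_b = torch.zeros(())
    for t in range(len(log_p_sender_msg)):
        adv_s = reward - baseline_s[t].detach()   # advantage for the sender's message
        adv_r = reward - baseline_r[t].detach()   # advantage for the receiver's actions
        loss_r = loss_r + adv_s * log_p_sender_msg[t] \
                        + adv_r * (log_p_stop[t] + log_p_receiver_msg[t])
        # Baselines regress toward the final reward R (the L_B^i term).
        loss_b = loss_b + (reward - baseline_s[t]) ** 2 + (reward - baseline_r[t]) ** 2
    return loss_r, loss_b
```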
Training and Evaluation We train both the sender and receiver as well as associated baseline networks using RMSProp (Tieleman & Hinton, 2012) with learning rate set to 10−4 and minibatches of size 64 each. The coefficients for the entropy regularization, λs and λm, are set to 0.08 and 0.01 respectively, based on the development set performance from the preliminary experiments. Each training run is early-stopped based on the development set accuracy for a maximum of 500 epochs. We evaluate each model on a test set by computing the accuracy@K, where K is set to be 10% of the number of categories in each of the three test sets (K is either 6 or 7, since we always include the classes from training). We use this metric to enable comparison between the different test sets and to avoid overpenalizing predicting similar classes, e.g. kangaroo and wallaby. We set the maximum length of a conversation to be 10, i.e., Tmax = 10. We train on a single GPU (Nvidia Titan X Pascal), and a single experiment takes roughly 8 hours for 500 epochs. Code We used PyTorch [http://pytorch.org]. Our implementation of the agents and instructions on how to build the dataset are available on Github [https://github.com/nyu-dl/MultimodalGame]. 6 RESULTS AND ANALYSIS The model and approach in this paper are differentiated from previous work mainly by: 1) the variable conversation length, 2) the multi-modal nature of the game and 3) the particular nature of the communication protocol, i.e., the messages. In this section, we experimentally examine our setup and specifically test the following hypotheses: • The more difficult or complex the referential game, the more dialogue turns would be needed if humans were to play it. Similarly, we expect the receiver to need more information, and ask more questions, if the problem is more difficult. Hence, we examine the relationship between conversation length and accuracy/difficulty. • As the agents take turns in a continuing conversation, more information becomes available, which implies that the receiver should become more sure about its prediction, even if the problem is difficult to begin with. Thus, we separately examine the confidence of predictions as the conversation progresses. • The agents play very different roles in the game. On the one hand, we would hypothesize the receiver’s messages to become more and more specific. For example, if the receiver has already established that the picture is of a feline, it does not make sense to ask, e.g., whether the animal has tusks or fins. This implies that the entropy of its messages should decrease. On the other hand, as questions become more specific, they are also likely to become more difficult for the sender to answer with high confidence. Answering that something is an aquatic mammal is easier than describing, e.g., the particular shape of a fin. Consequently, the entropy of the sender’s messages is likely to increase as it grows less confident in its answers. To examine this, we analyze the information theoretic content of the messages sent by both agents. In what follows, we discuss experiments along the lines of these hypotheses. In addition, we analyze the impact of changing the message dimensionality, and the effect of applying visual and linguistic attention mechanisms. Conversation length and accuracy/difficulty We train a pair of agents with an adaptive conversation length in which the receiver may terminate the conversation early based on the stop probability. 
Once training is done, we inspect the relationship between average conversation length and difficulty across classes, as well as the accuracy per the conversation length by partitioning the test examples into length-based bins. We expect that more difficult classes require a higher average length of exchange. To test this hypothesis, we use the accuracy of a separate classifier as a proxy for the difficulty of a sample. Specifically, we train a classifier based on a pre-trained ResNet-50, in which we freeze all but the last layer, and obtain the F1 score per class evaluated on the in-domain test set. The Pearson correlation between the F1 score and average conversation length across classes is −0.81 with a p-value of 4× 10−15 implying a statistically significant negative relationship, as displayed in Fig. 2 (a). In addition, we present the accuracies against the conversation lengths (as automatically determined by the receiver) in Fig. 2 (b). We notice a clear trend with the in-domain test set: examples for which the conversations are shorter are better classified, which might indicate that they are easier. It is important to remember that the receiver’s stop probability is not artificially tied to the performance nor confidence of the receiver’s prediction, but is simply learned by playing the proposed game. A similar trend can be observed with the out-of-domain test set, however, to a lesser degree. A similar trend of having longer conversation for more difficult objects is also found with humans in the game of 20 questions (Cohen & Lake, 2016).2 2 Accuracy scores in relation to the number of questions were obtained via personal communication. Conversation length and confidence With the agents trained with an adaptive conversation length, we can investigate how the prediction uncertainty of the receiver evolves over time. We plot the evolution of the entropy of the prediction distribution in Fig. 3 (a) averaged per conversation length bucket. We first notice that the conversation length, determined by the receiver on its own, correlates well with the prediction confidence (measured as negative entropy) of the receiver. Also, it is clear on the in-domain test set that the entropy almost monotonically decreases over the conversation, and the receiver terminates the conversation when the predictive entropy converges. This trend is however not apparent with the out-of-domain test set, which we attribute to the difficulty of zero-shot generalization. The goal of the conversation, i.e., the series of message exchanges, is to distinguish among many different objects. The initial message from the sender could for example give a rough idea of the high-level category that an object belongs to, after which the goal becomes to distinguish different objects within that high-level category. In other words, objects in a single such cluster, which are visually similar due to the sender’s access to the visual mode of an object, are predicted at different time steps in the conversation. We qualitatively examine this hypothesis by visualizing how the predictive probabilities of the receiver evolve over a conversation. In Fig. 3 (b,c), we show two example categories – kangaroo and wolf. As the conversation progress and more information is gathered for the receiver, similar but incorrect categories receive smaller probabilities than the correct one. We notice a similar trend with all other categories. 
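A small sketch of the correlation analysis behind Fig. 2(a): the per-class F1 scores of the separate ResNet-50 proxy classifier are correlated with the average conversation length per class. SciPy is used here for illustration; the function name is ours.

```python
import numpy as np
from scipy.stats import pearsonr

def difficulty_length_correlation(f1_per_class, avg_length_per_class):
    """Pearson correlation (and p-value) between per-class F1 of the proxy
    classifier and the average conversation length per class, as in Fig. 2(a)."""
    return pearsonr(np.asarray(f1_per_class), np.asarray(avg_length_per_class))
```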
Information theoretic message content In the previous section, we examined how prediction certainty evolved over time. We can do the same with the messages sent by the respective agents. In Fig. 4, we plot the entropies of the message distributions by the sender and receiver. We notice that, as the conversation progresses, the entropy decreases for the receiver, while it increases for the sender. This observation can be explained by the following conjecture. As the receiver accumulates information transmitted by the sender, the set of possible queries to send back to the sender shrinks, and consequently the entropy decreases. It could be said that the questions become more specific as more information becomes available to the receiver as it zones in on the correct answer. On the other hand, as the receiver’s message becomes more specific and difficult to answer, the certainty of the sender in providing the correct answer decreases, thereby increasing the entropy of the sender’s message distribution. We notice a similar trend on the out-of-domain test set as well. Effect of the message dimensionality Next, we vary the dimensionality d of each message to investigate the impact of the constraint on the communication channel, while keeping the conversation length adaptive. We generally expect a better accuracy with a higher bandwidth. More specifically, we expect the generalization to unseen categories (out-of-domain test) would improve as the information bandwidth of the communication channel increases. When the bandwidth is limited, the agents will be forced to create a communication protocol highly specialized for categories seen during training. On the other hand, the agents will learn to decompose structures underlying visual and textual modes of an object into more generalizable descriptions with a higher bandwidth channel. The accuracies reported in Fig. 5 agree well with this hypothesis. On the in-domain test set, we do not see significant improvement nor degradation as the message dimensionality changes. We observe, however, a strong correlation between the message dimensionality and the accuracy on the out-of-domain test set. With 32-dimensional messages, the agents were able to achieve up to 45% accuracy@7 on the out-of-domain test set which consists of 10 mammals not seen during training. The effect of modifying the message dimension was less clear when measured against the transfer set. Effect of Attention Mechanism All the experiments so far have been run without attention mechanism. We train additional three pairs of agents with 32-dimensional message vectors; (1) attentionbased sender, (2) attention-based receiver, and (3) attention-based sender and attention-based receiver. On the in-domain test set, we are not able to observe any improvement from the attention mechanism on either of the agents. We did however notice that the attention mechanism (attention-based receiver) significantly improves the accuracy on the transfer test set from 16.9% up to 27.4%. We conjecture that this is due to the fact that attention allows the agents to focus on the aspects of the objects (e.g. certain words in descriptions; or regions in images) that they are familiar with, which means that they are less susceptible to the noise introduced from being exposed to an entirely new category. We leave further analysis of the effect of the attention mechanism for future work. Is communication necessary? 
One important consideration is whether the trained agents utilize the adaptability of the communication protocol. It is indeed possible that the sender does not learn to shape communication and simply relies on the random communication protocol decided by the random initialization of its parameters. In this case, the receiver will need to recover information from the sender sent via this random communication channel. In order to verify this is not the case, we train a pair of agents without updating the parameters of the sender. As the receiver is still updated, and the sender’s information still flows toward the receiver, learning happens. We, however, observe that the overall performance significantly lags behind the case when agents are trained together, as shown in Fig. 6. This suggests that the agents must learn a new, task-specific communication protocol, which emerges in order to solve the problem successfully.3 7 CONCLUSION In this paper, we have proposed a novel, multi-modal, multi-step referential game for building and analyzing communication-based neural agents. The design of the game enables more human-like communication between two agents, by allowing a variable-length conversation with a symmetric communication. The conducted experiments and analyses reveal three interesting properties of the communication protocol, or artificial language, that emerges from learning to play the proposed game. First, the sender and receiver are able to adjust the length of the conversation based on the difficulty of predicting the correct object. The length of the conversation is found to (negatively) correlate with the confidence of the receiver in making predictions. Second, the receiver gradually asks more specific questions as the conversation progresses. This results in an increase of entropy in the sender’s message distribution, as there are more ways to answer those highly specific questions. We further observe that increasing the bandwidth of communication, measured in terms of the message dimensionality, allows for improved zero-shot generalization. Most importantly, we present a suite of hypotheses and associated experiments for investigating an emergent communication protocol, which we believe will be useful for the future research on emergent communication. Future Direction Despite the significant extension we have made to the basic referential game, the proposed multi-modal, multi-step game also exhibits a number of limitations. First, an emergent 3There are additional statistics about the stability of training in Appendix B. communication from this game is not entirely symmetric as there is no constraint that prevents the two agents from partitioning the message space. This could be addressed by having more than two agents interacting with each other while exchanging their roles, which we leave as future work. Second, the message set S consists of fixed-dimensional binary vectors. This choice effectively prevents other linguistic structures, such as syntax. Third, the proposed game, as well as any existing referential game, does not require any action, other than speaking. This is in contrast to the first line of research discussed earlier in Sec. 1, where communication happens among active agents. We anticipate a future research direction in which both of these approaches are combined. ACKNOWLEDGMENTS We thank Brenden Lake and Alex Cohen for valuable discussion. We also thank Maximilian Nickel, Y-Lan Boureau, Jason Weston, Dhruv Batra, and Devi Parikh for helpful suggestions. 
KC thanks AdeptMind, Tencent, eBay, NVIDIA, and CIFAR for their support. AD thanks the NVIDIA Corporation for their donation of a Titan X Pascal. This work was done by KE as a part of the course DS-GA 1010-001 Independent Study in Data Science at the Center for Data Science, New York University. A part of Fig. 1 is licensed from EmmyMik/CC BY 2.0/https://www.flickr.com/photos/emmymik/8206632393/.

A TABLE OF NOTATIONS

B STABILITY OF TRAINING

We ran our standard setup⁴ six times using different random seeds. For each experiment, we trained the model until convergence using early stopping against the validation data, then measured the loss and accuracy on the in-domain test set. The accuracy@6 had a mean of 96.6% with a variance of 1.98e−1, the accuracy@1 had a mean of 86.0% with a variance of 7.59e−1, and the loss had a mean of 0.611 with a variance of 2.72e−3. These results suggest that the model is not only effective at classifying images, but also robust to random restarts. ⁴The standard setup uses adaptive conversation lengths with a maximum length of 10 and a message dimension of 32. The values of other hyperparameters are described in Section 5.2.
1. What is the unique aspect of the paper's approach to learning representations? 2. What is the experimental setup used in the paper, and how does it allow for comparing different approaches? 3. Are there any concerns regarding the reproducibility of the results due to the unavailability of the dataset or agents? 4. How clear are the presentations of the results, and what specific information is missing? 5. Where can we find the detailed description of the training procedure?
Review
Review The setup in the paper for learning representations is different from many other approaches in the area, using two agents that communicate over descriptions of objects through different modalities. The experimental setup is interesting in that it allows comparing approaches to learning an effective representation. The paper does mention that the agents will be made available, but leaves open whether the dataset will also be available. For reproducibility and comparisons, this availability would be essential. I like that the paper gives a bit of context, but the presentation of results could be clearer, and I am missing some more explicit information on training and results (e.g., how long the training takes, how many training examples, how many test examples, classification rates, etc.). The paper says the training procedure is described in Appendix A, but as far as I can see that appendix contains the table of notations.
ICLR
Title Emergent Communication in a Multi-Modal, Multi-Step Referential Game Abstract Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration. The multi-modal multi-step setting allows agents to develop an internal communication significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation may vary according to the difficulty of the task. We examine these properties empirically using a dataset consisting of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, where gradual information exchange informs better predictions and higher communication bandwidth improves generalization. N/A Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration. The multi-modal multi-step setting allows agents to develop an internal communication significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation may vary according to the difficulty of the task. We examine these properties empirically using a dataset consisting of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, where gradual information exchange informs better predictions and higher communication bandwidth improves generalization. 1 INTRODUCTION Recently, there has been a surge of work on neural network-based multi-agent systems that are capable of communicating with each other in order to solve a problem. Two distinct lines of research can be discerned. In the first one, communication is used as an essential tool for sharing information among multiple active agents in a reinforcement learning scenario (Sukhbaatar et al., 2016; Foerster et al., 2016; Mordatch & Abbeel, 2017; Andreas et al., 2017). Each of the active agents is, in addition to its traditional capability of interacting with the environment, able to communicate with other agents. A population of such agents is subsequently jointly tuned to reach a common goal. The main goal of this line of work is to use communication (which may be continuous) as a means to enhance learning in a difficult, sparse-reward environment. The communication may also mimic human conversation, e.g., in settings where agents engage in natural language dialogue based on a shared visual modality (Das et al., 2017; Strub et al., 2017). In contrast, the goal of our work is to learn the communication protocol, and aligns more closely with another line of research, which focuses on investigating and analyzing the emergence of communication in (cooperative) multi-agent referential games (Lewis, 2008; Skyrms, 2010; Steels & Loetzsch, 2012), where one agent (the sender) must communicate what it sees using some discrete emergent communication protocol, while the other agent (the receiver) is tasked with figuring out what the first agent saw. 
These lines of work are partially motivated by the idea that artificial communication (and other manifestations of machine intelligence) can emerge through interacting with the world and/or other agents, which could then converge towards human language (Gauthier & Mordatch, 2016; Mikolov et al., 2015; Lake et al., 2016; Kiela et al., 2016). (Lazaridou et al., 2016) have recently proposed a basic version of this game, where there is only a single transmission of a message from the sender to the receiver, as a test bed for both inducing and analyzing a communication protocol between two neural network-based agents. A related approach to using a referential game with two agents is proposed by (Andreas & Klein, 2016). (Jorge et al., 2016) have more recently introduced a game similar to the setting above, but with multiple transmissions of messages between the two agents. The sender is, however, strictly limited to sending single bit (yes/no) messages, and the number of exchanges is kept fixed. These earlier works lack two fundamental aspects of human communication in solving cooperative games. First, human information exchange is bidirectional with symmetric communication abilities, and spans exchanges of arbitrary length. In other words, linguistic interaction is not one-way, and can take as long or as short as it needs. Second, the information exchange emerges as a result of a disparity in knowledge or access to information, with the capability of bridging different modalities. For example, a human who has never seen a tiger but knows that it is a “big cat with stripes” would be able to identify one in a picture without effort. That is, humans can identify a previously unseen object from a textual description alone, while agents in previous interaction games have access to the same modality (a picture) and their shared communication protocol. Based on these considerations, we extend the basic referential game used in (Lazaridou et al., 2016; Andreas & Klein, 2016; Jorge et al., 2016) and (Havrylov & Titov, 2017) into a multi-modal, multi-step referential game. Firstly, our two agents, the sender and receiver, are grounded in different modalities: one has access only to the visual modality, while the other has access only to textual information (multi-modal). The sender sees an image and communicates it to the receiver whose job is to determine which object the sender refers to, while only having access to a set of textual descriptions. Secondly, communication is bidirectional and symmetrical, in that both the sender and receiver may send an arbitrary binary vector to each other. Furthermore, we allow the receiver to autonomously decide when to terminate a conversation, which leads to an adaptive-length conversation (multistep). The multi-modal nature of our proposal enforces symmetric, high-bandwidth communication, as it is not enough for the agents to simply exchange the carbon copies of their modalities (e.g. communicating the value of an arbitrary pixel in an image) in order to solve the problem. The multistep nature of our work allows us to train the agents to develop an efficient strategy of communication, implicitly encouraging a shorter conversation for simpler objects and a longer conversation for more complex objects. We evaluate and analyze the proposed multi-modal, multi-step referential game by creating a new dataset consisting of images of mammals and their textual descriptions. 
The task is somewhat related to recently proposed multi-modal dialogue games, such as that of (de Vries et al., 2016), but then played by agents using their own emergent communication. We build neural network-based sender and receiver, implementing techniques such as visual attention (Xu et al., 2015) and textual attention (Bahdanau et al., 2014). Each agent generates a multi-dimensional binary message at each time step, and the receiver decides whether to terminate the conversation. We train both agents jointly using policy gradient (Williams, 1992). 2 MULTI-MODAL, MULTI-STEP REFERENTIAL GAME Game The proposed multi-modal, multi-step referential game is characterized by a tuple G = 〈S,O,OS , OR, s∗〉. S is a set of all possible messages used for communication by both the sender and receiver. An analogy of S in natural languages would be a set of all possible sentences. Unlike (Jorge et al., 2016), we let S be shared between the two agents, which makes the proposed game a more realistic proxy to natural language conversations where two parties share a single vocabulary. In this paper, we define the set of symbols to be a set of d-dimensional binary vectors, reminiscent of the widely-used bag-of-words representation of a natural language sentence. That is, S = {0, 1}d. O is a set of objects. OS and OR are the sets of two separate views, or modes, of the objects in O, exposed to the sender and receiver, respectively. Due to the variability introduced by the choice of mode, the cardinalities of the latter two sets may differ, i.e., |OS | 6= |OR|, and it is usual for the cardinalities of both OS and OR to be greater than or equal to that of O, i.e., |OS | ≥ |O| and |OR| ≥ |O|. In this paper, for instance, O is a set of selected mammals, and OS and OR are, respectively, images and textual descriptions of those mammals: |OS | |OR| = |O|. The ground-truth map between OS and OR is given as s∗ : OS ×OR → {0, 1} . This function s∗ is used to determine whether elements os ∈ OS and or ∈ OR belong to the same object in O. It returns 1 when they do, and 0 otherwise. At the end of a conversation, the receiver selects an element from OR as an answer, and s∗ is used as a scorer of this particular conversation based on the sender’s object os and the receiver’s prediction ôr. Agents The proposed game is played between two agents, sender AS and receiver AR. A sender is a stochastic function that takes as input the sender’s view of an object os ∈ OS and the message mr ∈ S received from the receiver and outputs a binary message ms ∈ S. That is, AS : OS × S → S. We constrain the sender to be memory-less in order to ensure any message created by the sender is a response to an immediate message sent by the receiver. Unlike the sender, it is necessary for the receiver to possess a memory in order to reason through a series of message exchanges with the sender and make a final prediction. The receiver also has an option to determine whether to terminate the on-going conversation. We thus define the receiver as: AR : S × Rq → Ξ×OR × S × Rq, where Ξ = {0, 1} indicates whether to terminate the conversation. It receives the sender’s message ms ∈ S and its memory h ∈ Rq from the previous step, and stochastically outputs: (1) whether to terminate the conversation s ∈ {0, 1}, (2) its prediction ôr ∈ OR (if decided to terminate) and (3) a message mr ∈ S back to the sender (if decided not to terminate). Play Given G, one game instance is initiated by uniformly selecting an object o from the object set O. 
A corresponding view os ∈ OS is sampled and given to the sender AS . The whole set OR is provided to the receiver AR. The receiver’s memory and initial message are learned as separate parameters. 3 AGENTS At each time step t ∈ {1, . . . , Tmax}, the sender computes its message mts = AS(os,mt−1r ). This message is then transmitted to the receiver. The receiver updates its memory htr, decides whether to terminate the conversation st, makes its prediction otr, and creates a response: (s t, otr,m t r, h t r) = AR(m t s, h t−1 r ). If s t = 1, the conversation terminates, and the receiver’s prediction otr is used to score this game instance, i.e., s∗(os, otr). Otherwise, this process repeats in the next time step: t← t+ 1. Fig. 1 depicts a single sender-receiver exchange at time step t. Feedforward Sender Let os ∈ OS be a real-valued vector, and mr ∈ S be a d-dimensional binary message. We build a sender AS as a feedforward neural network that outputs a d-dimensional factorized Bernoulli distribution. It first computes the hidden state hs by hs = fs(os,mr), (1) and computes p(ms,j = 1) for all j = 1, . . . , d as p(ms,j = 1) = σ(w > s,jhs + bs,j), where σ is a sigmoid function, and ws,j ∈ Rdim(hs) and bs,j ∈ R are the weight vector and bias, respectively. During training, we sample a sender’s message from this distribution, while during test time we take the most likely message, i.e., ms,j = arg maxb∈{0,1} p(ms,j = b). Attention-based Sender When the view os of an object is given as a set of vectors {os1 , . . . , osn} rather than a single vector, we implement and test an attention mechanism from (Bahdanau et al., 2014; Xu et al., 2015). For each vector in the set, we first compute the attention weight against the received message mr as αj = exp(fs,att(osj ,mr))∑n j′=1 exp(fs,att(osj′ ,mr)) , and take the weighted-sum of the input vectors: õs = ∑n j=1 αjosj . This weighted sum is used instead of os as an input to fs in Eq. (1). Intuitively, this process of attention corresponds to selecting a subset of the sender’s view of an object according to a receiver’s query. Recurrent Receiver Let or ∈ OR be a real-valued vector, and ms ∈ S be a d-dimensional binary message received from the sender. A receiver AR is a recurrent neural network that first updates its memory by htr = fr(m t s, h t−1 r ) ∈ Rq, where fr is a recurrent activation function. We use a gated recurrent unit (GRU, Cho et al., 2014). The initial message from the receiver to the sender, m0r , is learned as a separate parameter. Given the updated memory vector htr, the receiver first computes whether to terminate the conversation. This is done by outputting a stop probability, as in p(st = 1) = σ(w>r,sh t r + br,s), where wr,s ∈ Rq and br,s ∈ R are the weight vector and bias, respectively. The receiver terminates the conversation (st = 1) by either sampling from (during training) or taking the most likely value (during test time) of this distribution. If st = 0, the receiver computes the message distribution similarly to the sender as a d-dimensional factorized Bernoulli distribution: p(mtr,j = 1) = σ(w > r,j tanh ( W>r h t r + U > r ( ∑ or∈OR p(or = 1)gr(or) ) + cr ) + br,j), where gr : Rdim(or) → Rq is a trainable function that embeds or into a q-dimensional real-valued vector space. The second term inside the tanh function ensures that the message generated by the receiver takes into consideration the receiver’s current belief p(or = 1) (see Eq. (2)) on which object the sender is viewing. 
If st = 1 (terminate), the receiver instead produces its prediction by computing the distribution over all the elements in OR: p(or = 1) = exp(gr(or) >htr)∑ o′r∈OR exp(gr(o′r) >htr) . (2) Again, gr(or) is the embedding of an object o based on the receiver’s view or, similarly to what was proposed by (Larochelle et al., 2008). The receiver’s prediction is given by ôr = arg maxor∈OR p(or = 1), and the entire prediction distribution is used to compute the cross-entropy loss. Attention-based Receiver Similarly to the sender, we can incorporate the attention mechanism in the receiver. This is done at the level of the embedding function gr by modifying it to take as input both the set of vectors or = {or,1, . . . , or,n} and the current memory vector htr. Attention weights over the view vectors are computed against the memory vector, and their weighted sum õr, or its affine transformation to Rq , is returned. 4 TRAINING Both the sender and receiver are jointly trained in order to maximize the score s∗(os, ôr). Our per-instance loss function Li is the sum of the classification loss Lic and the reinforcement learning loss Lir. The classification loss is a usual cross-entropy loss defined as Lic = log p(o ∗ r = 1), where o∗r ∈ OR is the view of the correct object. The reinforcement learning loss is defined as Lir = T∑ t=1 (R−Bs(os,mt−1r )) d∑ j=1 log p(mts,j)︸ ︷︷ ︸ sender + (R−Br(mtr, ht−1r ))(log p(st) + d∑ j=1 log p(mtr,j))︸ ︷︷ ︸ receiver , where R is a reward given by the ground-truth mapping s∗. This reinforcement learning loss corresponds to REINFORCE (Williams, 1992). Bs and Br are baseline estimators for the sender and receiver, respectively, and both of them are trained to predict the final reward R, as suggested by (Mnih & Gregor, 2014): LiB = T∑ t=1 (R−Bs(os,mt−1r ))2 + (R−Br(mts, ht−1r ))2. In order to facilitate the exploration by the sender and receiver during training, we regularize the negative entropies of the sender’s and receiver’s message distributions. We also minimize the negative entropy of the receiver’s termination distribution to encourage the conversation to be of length 1− ( 12 ) Tmax on average. The final per-instance loss can then be written as Li = Lic + L i r − T∑ t=1 ( λsH(s t) + λm d∑ j=1 (H(mts,j) +H(m t r,j)) ) , where H is the entropy, and λs ≥ 0 and λm ≥ 0 are regularization coefficients. We minimize this loss by computing its gradient with respect to the parameters of both the sender and receiver and taking a step toward the opposite direction. We list all the mathematical symbols used in the description of the game in Appendix A. 5 EXPERIMENTAL SETTINGS 5.1 DATA COLLECTION AND PREPROCESSING We collect a new dataset consisting of images and textual descriptions of mammals. We crawl the nodes in the subtree of the “mammal” synset in WordNet (Miller, 1995). For each node, we collect the word o and the corresponding textual description or in order to construct the object set O and the receiver’s view set OR. For each word o, we query Flickr to retrieve as many as 650 images 1. These images form the sender’s view set OS . We sample 70 mammals from the subtree and build three sets from the collected data. First, we keep a subset of sixty mammals for training (550 images per mammal) and set aside data for validation (50 images per mammal) and test (20 images per mammal). This constitutes the in-domain test, that measures how well the model does on mammals that it is familiar with. 
We use the remaining ten mammals to build an out-of-domain test set (100 images per mammal), which allows us to test the generalization ability of the sender and receiver to unseen objects, and thereby to determine whether the receiver indeed relies on the availability of a different mode from the sender. In addition to the mammals, we build a third test set consisting of 10 different types of insects, rather than mammals. To construct this transfer test, we uniformly select 100 images per insect at random from the ImageNet dataset (Deng et al., 2009), while the descriptions are collected from WordNet, similarly to the mammals. The test is meant to measure an extreme case of zero-shot generalization, to an entirely different category of objects (i.e., insects rather than mammals, and images from ImageNet rather than from Flickr). Image Processing Instead of a raw image, we use features extracted by ResNet-34 (He et al., 2016). With the attention-based sender, we use 64 (8× 8) 512-dimensional feature vectors from the final convolutional layer. Otherwise, we use the 512-dimensional feature vector after average pooling those 64 vectors. We do not fine-tune the network. 1We query Flickr, obtaining more than 650 images per word, then we remove duplicates and use a heuristic to discard undesirables images. Duplicates are detected using dHash (Tantos, 2017). As a heuristic, we take an image classifier that was trained on ImageNet (Krizhevsky et al., 2012), classify each candidate image, and discard an image if its most likely class is not an animal. We randomly select from the remaining images to acquire the desired amount. Text Processing Each description is lowercased. Stopwords are filtered using the Stopwords Corpus included in NLTK (Bird et al., 2009). We treat each description as a bag of unique words by removing any duplicates. The average description length is 9.1 words with a standard deviation of 3.16. Because our dataset is relatively small, especially in the textual mode, we use pretrained 100-dimensional GloVe word embeddings (Pennington et al., 2014). With the attention-based receiver, we consider a set of such GloVe vectors as or, and otherwise, the average of those vectors is used as the representation of a description. 5.2 MODELS AND TRAINING Feedforward Sender When attention is not used, the sender is configured to have a single hidden layer with 256 tanh units. The input os is constructed by concatenating the image vector, the receiver’s message vector, their point-wise difference and point-wise product, after embedding the image and message vectors into the same space by a linear transformation. The attention-based sender uses a single-layer feedforward network with 256 tanh units to compute the attention weights. Recurrent Receiver The receiver is a single hidden-layer recurrent neural network with 64 gated recurrent units. When the receiver is configured to use attention over the words in each description, we use a feedforward network with a single hidden layer of 64 rectified linear units. Baseline Networks The baseline networks Bs and Br are both feedforward networks with a single hidden layer of 500 rectified linear units each. The receiver’s baseline network takes as input the recurrent hidden state ht−1r but does not backpropagate the error gradient through the receiver. 
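As an illustration of the text pipeline just described, a short sketch of turning one description into its bag-of-unique-words GloVe representation; the function name and the assumption that the pretrained GloVe vectors are available as a plain word-to-vector dictionary are ours.

```python
import numpy as np
from nltk.corpus import stopwords  # Stopwords Corpus included in NLTK

STOPWORDS = set(stopwords.words("english"))

def description_vector(description, glove, dim=100):
    """Lowercase, drop stopwords and duplicate words, then average the
    pretrained 100-dimensional GloVe embeddings of the remaining words."""
    words = {w for w in description.lower().split() if w not in STOPWORDS}
    vectors = [glove[w] for w in words if w in glove]
    if not vectors:                        # no known words: fall back to zeros
        return np.zeros(dim, dtype=np.float32)
    return np.mean(vectors, axis=0).astype(np.float32)
```

With the attention-based receiver, the set of per-word vectors would be kept instead of averaged, as stated above.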
Training and Evaluation We train both the sender and receiver as well as associated baseline networks using RMSProp (Tieleman & Hinton, 2012) with learning rate set to 10−4 and minibatches of size 64 each. The coefficients for the entropy regularization, λs and λm, are set to 0.08 and 0.01 respectively, based on the development set performance from the preliminary experiments. Each training run is early-stopped based on the development set accuracy for a maximum of 500 epochs. We evaluate each model on a test set by computing the accuracy@K, where K is set to be 10% of the number of categories in each of the three test sets (K is either 6 or 7, since we always include the classes from training). We use this metric to enable comparison between the different test sets and to avoid overpenalizing predicting similar classes, e.g. kangaroo and wallaby. We set the maximum length of a conversation to be 10, i.e., Tmax = 10. We train on a single GPU (Nvidia Titan X Pascal), and a single experiment takes roughly 8 hours for 500 epochs. Code We used PyTorch [http://pytorch.org]. Our implementation of the agents and instructions on how to build the dataset are available on Github [https://github.com/nyu-dl/MultimodalGame]. 6 RESULTS AND ANALYSIS The model and approach in this paper are differentiated from previous work mainly by: 1) the variable conversation length, 2) the multi-modal nature of the game and 3) the particular nature of the communication protocol, i.e., the messages. In this section, we experimentally examine our setup and specifically test the following hypotheses: • The more difficult or complex the referential game, the more dialogue turns would be needed if humans were to play it. Similarly, we expect the receiver to need more information, and ask more questions, if the problem is more difficult. Hence, we examine the relationship between conversation length and accuracy/difficulty. • As the agents take turns in a continuing conversation, more information becomes available, which implies that the receiver should become more sure about its prediction, even if the problem is difficult to begin with. Thus, we separately examine the confidence of predictions as the conversation progresses. • The agents play very different roles in the game. On the one hand, we would hypothesize the receiver’s messages to become more and more specific. For example, if the receiver has already established that the picture is of a feline, it does not make sense to ask, e.g., whether the animal has tusks or fins. This implies that the entropy of its messages should decrease. On the other hand, as questions become more specific, they are also likely to become more difficult for the sender to answer with high confidence. Answering that something is an aquatic mammal is easier than describing, e.g., the particular shape of a fin. Consequently, the entropy of the sender’s messages is likely to increase as it grows less confident in its answers. To examine this, we analyze the information theoretic content of the messages sent by both agents. In what follows, we discuss experiments along the lines of these hypotheses. In addition, we analyze the impact of changing the message dimensionality, and the effect of applying visual and linguistic attention mechanisms. Conversation length and accuracy/difficulty We train a pair of agents with an adaptive conversation length in which the receiver may terminate the conversation early based on the stop probability. 
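(Before continuing with this analysis, a brief sketch of the accuracy@K metric used for evaluation above; the tensor shapes and the function name are our own assumptions.)

```python
import torch

def accuracy_at_k(scores, targets, k):
    """scores: (N, C) receiver scores over the C candidate descriptions;
    targets: (N,) indices of the correct objects. K is roughly 10% of the
    number of categories in each test set (6 or 7 here)."""
    topk = scores.topk(k, dim=1).indices              # (N, k) best candidates
    hits = (topk == targets.unsqueeze(1)).any(dim=1)  # true if target is in top-k
    return hits.float().mean().item()
```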
Once training is done, we inspect the relationship between average conversation length and difficulty across classes, as well as the accuracy per the conversation length by partitioning the test examples into length-based bins. We expect that more difficult classes require a higher average length of exchange. To test this hypothesis, we use the accuracy of a separate classifier as a proxy for the difficulty of a sample. Specifically, we train a classifier based on a pre-trained ResNet-50, in which we freeze all but the last layer, and obtain the F1 score per class evaluated on the in-domain test set. The Pearson correlation between the F1 score and average conversation length across classes is −0.81 with a p-value of 4× 10−15 implying a statistically significant negative relationship, as displayed in Fig. 2 (a). In addition, we present the accuracies against the conversation lengths (as automatically determined by the receiver) in Fig. 2 (b). We notice a clear trend with the in-domain test set: examples for which the conversations are shorter are better classified, which might indicate that they are easier. It is important to remember that the receiver’s stop probability is not artificially tied to the performance nor confidence of the receiver’s prediction, but is simply learned by playing the proposed game. A similar trend can be observed with the out-of-domain test set, however, to a lesser degree. A similar trend of having longer conversation for more difficult objects is also found with humans in the game of 20 questions (Cohen & Lake, 2016).2 2 Accuracy scores in relation to the number of questions were obtained via personal communication. Conversation length and confidence With the agents trained with an adaptive conversation length, we can investigate how the prediction uncertainty of the receiver evolves over time. We plot the evolution of the entropy of the prediction distribution in Fig. 3 (a) averaged per conversation length bucket. We first notice that the conversation length, determined by the receiver on its own, correlates well with the prediction confidence (measured as negative entropy) of the receiver. Also, it is clear on the in-domain test set that the entropy almost monotonically decreases over the conversation, and the receiver terminates the conversation when the predictive entropy converges. This trend is however not apparent with the out-of-domain test set, which we attribute to the difficulty of zero-shot generalization. The goal of the conversation, i.e., the series of message exchanges, is to distinguish among many different objects. The initial message from the sender could for example give a rough idea of the high-level category that an object belongs to, after which the goal becomes to distinguish different objects within that high-level category. In other words, objects in a single such cluster, which are visually similar due to the sender’s access to the visual mode of an object, are predicted at different time steps in the conversation. We qualitatively examine this hypothesis by visualizing how the predictive probabilities of the receiver evolve over a conversation. In Fig. 3 (b,c), we show two example categories – kangaroo and wolf. As the conversation progress and more information is gathered for the receiver, similar but incorrect categories receive smaller probabilities than the correct one. We notice a similar trend with all other categories. 
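The per-class correlation reported in Fig. 2 (a) can be reproduced with a few lines once the per-class F1 scores and average conversation lengths are collected; the dictionary-based interface below is our own choice, not the authors' analysis code.

```python
from scipy.stats import pearsonr

def difficulty_vs_length(f1_per_class, avg_length_per_class):
    """Correlate per-class F1 (a proxy for difficulty) with the average
    conversation length; the paper reports r = -0.81 with p = 4e-15."""
    classes = sorted(f1_per_class)
    f1 = [f1_per_class[c] for c in classes]
    lengths = [avg_length_per_class[c] for c in classes]
    return pearsonr(f1, lengths)  # (correlation, p-value)
```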
Information theoretic message content In the previous section, we examined how prediction certainty evolved over time. We can do the same with the messages sent by the respective agents. In Fig. 4, we plot the entropies of the message distributions by the sender and receiver. We notice that, as the conversation progresses, the entropy decreases for the receiver, while it increases for the sender. This observation can be explained by the following conjecture. As the receiver accumulates information transmitted by the sender, the set of possible queries to send back to the sender shrinks, and consequently the entropy decreases. It could be said that the questions become more specific as more information becomes available to the receiver as it zones in on the correct answer. On the other hand, as the receiver’s message becomes more specific and difficult to answer, the certainty of the sender in providing the correct answer decreases, thereby increasing the entropy of the sender’s message distribution. We notice a similar trend on the out-of-domain test set as well. Effect of the message dimensionality Next, we vary the dimensionality d of each message to investigate the impact of the constraint on the communication channel, while keeping the conversation length adaptive. We generally expect a better accuracy with a higher bandwidth. More specifically, we expect the generalization to unseen categories (out-of-domain test) would improve as the information bandwidth of the communication channel increases. When the bandwidth is limited, the agents will be forced to create a communication protocol highly specialized for categories seen during training. On the other hand, the agents will learn to decompose structures underlying visual and textual modes of an object into more generalizable descriptions with a higher bandwidth channel. The accuracies reported in Fig. 5 agree well with this hypothesis. On the in-domain test set, we do not see significant improvement nor degradation as the message dimensionality changes. We observe, however, a strong correlation between the message dimensionality and the accuracy on the out-of-domain test set. With 32-dimensional messages, the agents were able to achieve up to 45% accuracy@7 on the out-of-domain test set which consists of 10 mammals not seen during training. The effect of modifying the message dimension was less clear when measured against the transfer set. Effect of Attention Mechanism All the experiments so far have been run without attention mechanism. We train additional three pairs of agents with 32-dimensional message vectors; (1) attentionbased sender, (2) attention-based receiver, and (3) attention-based sender and attention-based receiver. On the in-domain test set, we are not able to observe any improvement from the attention mechanism on either of the agents. We did however notice that the attention mechanism (attention-based receiver) significantly improves the accuracy on the transfer test set from 16.9% up to 27.4%. We conjecture that this is due to the fact that attention allows the agents to focus on the aspects of the objects (e.g. certain words in descriptions; or regions in images) that they are familiar with, which means that they are less susceptible to the noise introduced from being exposed to an entirely new category. We leave further analysis of the effect of the attention mechanism for future work. Is communication necessary? 
One important consideration is whether the trained agents utilize the adaptability of the communication protocol. It is indeed possible that the sender does not learn to shape communication and simply relies on the random communication protocol decided by the random initialization of its parameters. In this case, the receiver will need to recover information from the sender sent via this random communication channel. In order to verify this is not the case, we train a pair of agents without updating the parameters of the sender. As the receiver is still updated, and the sender’s information still flows toward the receiver, learning happens. We, however, observe that the overall performance significantly lags behind the case when agents are trained together, as shown in Fig. 6. This suggests that the agents must learn a new, task-specific communication protocol, which emerges in order to solve the problem successfully.3 7 CONCLUSION In this paper, we have proposed a novel, multi-modal, multi-step referential game for building and analyzing communication-based neural agents. The design of the game enables more human-like communication between two agents, by allowing a variable-length conversation with a symmetric communication. The conducted experiments and analyses reveal three interesting properties of the communication protocol, or artificial language, that emerges from learning to play the proposed game. First, the sender and receiver are able to adjust the length of the conversation based on the difficulty of predicting the correct object. The length of the conversation is found to (negatively) correlate with the confidence of the receiver in making predictions. Second, the receiver gradually asks more specific questions as the conversation progresses. This results in an increase of entropy in the sender’s message distribution, as there are more ways to answer those highly specific questions. We further observe that increasing the bandwidth of communication, measured in terms of the message dimensionality, allows for improved zero-shot generalization. Most importantly, we present a suite of hypotheses and associated experiments for investigating an emergent communication protocol, which we believe will be useful for the future research on emergent communication. Future Direction Despite the significant extension we have made to the basic referential game, the proposed multi-modal, multi-step game also exhibits a number of limitations. First, an emergent 3There are additional statistics about the stability of training in Appendix B. communication from this game is not entirely symmetric as there is no constraint that prevents the two agents from partitioning the message space. This could be addressed by having more than two agents interacting with each other while exchanging their roles, which we leave as future work. Second, the message set S consists of fixed-dimensional binary vectors. This choice effectively prevents other linguistic structures, such as syntax. Third, the proposed game, as well as any existing referential game, does not require any action, other than speaking. This is in contrast to the first line of research discussed earlier in Sec. 1, where communication happens among active agents. We anticipate a future research direction in which both of these approaches are combined. ACKNOWLEDGMENTS We thank Brenden Lake and Alex Cohen for valuable discussion. We also thank Maximilian Nickel, Y-Lan Boureau, Jason Weston, Dhruv Batra, and Devi Parikh for helpful suggestions. 
KC thanks for support by AdeptMind, Tencent, eBay, NVIDIA, and CIFAR. AD thanks the NVIDIA Corporation for their donation of a Titan X Pascal. This work is done by KE as a part of the course DS-GA 1010-001 Independent Study in Data Science at the Center for Data Science, New York University. A part of Fig. 1 is licensed from EmmyMik/CC BY 2.0/https://www.flickr.com/photos/emmymik/8206632393/. A TABLE OF NOTATIONS B STABILITY OF TRAINING We ran our standard setup4 six times using different random seeds. For each experiment, we trained the model until convergence using early stopping against the validation data, then measured the loss and accuracy on the in-domain test set. The accuracy@6 had mean of 96.6% with variance of 1.98e−1, the accuracy@1 had mean of 86.0% with variance 7.59e−1, and the loss had mean of 0.611 with variance 2.72e−3. These results suggest that the model is not only effective at classifying images, but also robust to random restart. 4The standard setup uses adaptive conversation lengths with a maximum length of 10 and message dimension of 32. The values of other hyperparameters are described in Section 5.2.
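As a small illustration of how the Appendix B statistics could be aggregated across the six seeds, assuming per-seed results are stored as plain dictionaries (a layout we introduce for illustration):

```python
import numpy as np

def across_seed_stats(runs, keys=("acc_at_6", "acc_at_1", "loss")):
    """runs: list of per-seed result dicts (six seeds in Appendix B).
    Returns the mean and variance of each reported metric."""
    return {k: (np.mean([r[k] for r in runs]), np.var([r[k] for r in runs]))
            for k in keys}
```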
1. What is the focus of the paper regarding language emergence? 2. What are the strengths of the proposed approach, particularly in the reference game setting? 3. What are the weaknesses of the paper, especially in terms of clarity and metric choice? 4. Do you have any questions or suggestions regarding the experimental setup or analysis? 5. Are there any concerns or limitations regarding the applicability of the results?
Review
Review -------------- Summary and Evaluation: -------------- The paper presents a nice set of experiments on language emergence in a mutli-modal, multi-step setting. The multi-modal reference game provides an interesting setting for communication, with agents learning to map descriptions to images. The receiving agent's direct control over dialog length is also novel and allows for the interesting analysis presented in later sections. Overall I think this is an interesting and well-designed work; however, some details are missing that I think would make for a stronger submission (see weaknesses). -------------- Strengths: -------------- - Generally well-written with the Results and Analysis section appearing especially thought-out and nicely presented. - The proposed reference game provides a number of novel contributions -- giving the agents control over dialog length, providing both agents with the same vocabulary without constraints on how each uses it (implicit through pretraining or explicit in the structure/loss), and introducing an asymmetric multi-modal context for the dialog. - The analysis is extensive and well-grounded in the three key hypothesis presented at the beginning of Section 6. -------------- Weaknesses: -------------- - There is room to improve the clarity of Sections 3 and 4 and I encourage the authors to revisit these sections. Some specific suggestions that might help: - numbering all display style equations - when describing the recurrent receiver, explain the case where it terminates (s^t=1) first such that P(o_r=1) is defined prior to being used in the message generation equation. - I did not see an argument in support of the accuracy@K metric. Why is putting the ground truth in the top 10% the appropriate metric in this setting? Is it to enable comparison between the in-domain, out-domain, and transfer settings? - Unless I missed something, the transfer test set results only comes up once in the context of attention methods and are not mentioned elsewhere. Why is this? It seems appropriate to include in Figure 5 if no where else in the analysis. - Do the authors have a sense for how sensitive these results are to different runs of the training process? - I did not understand this line from Section 5.1: "and discarding any image with a category beyond the 398-th most frequent one, as classified by a pretrained ImageNet classifier'" - It is not specified (or I missed it) whether the F1 scores from the separate classifier are from training or test set evaluations. - I would have liked to see analysis on the training process such as a plot of reward (or baseline adjusted reward) over training iterations. - I encourage authors to see the EMNLP 2017 paper "Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog" which also perform multi-round dialogs between two agents. Like this work, the authors also proposed removing memory from one of the agents as a means to avoid learning degenerate 'non-dialog' protocols. - Very minor point: the use of fixed-length, non-sequence style utterances is somewhat disappointing given the other steps made in the paper to make the reference game more 'human like' such as early termination, shared vocabularies, and unconstrained utterance types. I understand however that this is left as future work. -------------- Curiosities: -------------- - I think the analysis is Figure 3 b,c is interesting and wonder if something similar can be computed over all examples. 
One option would be to plot accuracy@k for different utterance indexes -- essentially forcing the model to make a prediction after each round of dialog (or simply repeating its prediction if the model has chosen to stop).
ICLR
Title Emergent Communication in a Multi-Modal, Multi-Step Referential Game

Abstract Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration. The multi-modal multi-step setting allows agents to develop an internal communication significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation may vary according to the difficulty of the task. We examine these properties empirically using a dataset consisting of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, where gradual information exchange informs better predictions and higher communication bandwidth improves generalization.

1 INTRODUCTION

Recently, there has been a surge of work on neural network-based multi-agent systems that are capable of communicating with each other in order to solve a problem. Two distinct lines of research can be discerned. In the first one, communication is used as an essential tool for sharing information among multiple active agents in a reinforcement learning scenario (Sukhbaatar et al., 2016; Foerster et al., 2016; Mordatch & Abbeel, 2017; Andreas et al., 2017). Each of the active agents is, in addition to its traditional capability of interacting with the environment, able to communicate with other agents. A population of such agents is subsequently jointly tuned to reach a common goal. The main goal of this line of work is to use communication (which may be continuous) as a means to enhance learning in a difficult, sparse-reward environment. The communication may also mimic human conversation, e.g., in settings where agents engage in natural language dialogue based on a shared visual modality (Das et al., 2017; Strub et al., 2017). In contrast, the goal of our work is to learn the communication protocol, and aligns more closely with another line of research, which focuses on investigating and analyzing the emergence of communication in (cooperative) multi-agent referential games (Lewis, 2008; Skyrms, 2010; Steels & Loetzsch, 2012), where one agent (the sender) must communicate what it sees using some discrete emergent communication protocol, while the other agent (the receiver) is tasked with figuring out what the first agent saw.
These lines of work are partially motivated by the idea that artificial communication (and other manifestations of machine intelligence) can emerge through interacting with the world and/or other agents, which could then converge towards human language (Gauthier & Mordatch, 2016; Mikolov et al., 2015; Lake et al., 2016; Kiela et al., 2016). (Lazaridou et al., 2016) have recently proposed a basic version of this game, where there is only a single transmission of a message from the sender to the receiver, as a test bed for both inducing and analyzing a communication protocol between two neural network-based agents. A related approach to using a referential game with two agents is proposed by (Andreas & Klein, 2016). (Jorge et al., 2016) have more recently introduced a game similar to the setting above, but with multiple transmissions of messages between the two agents. The sender is, however, strictly limited to sending single bit (yes/no) messages, and the number of exchanges is kept fixed. These earlier works lack two fundamental aspects of human communication in solving cooperative games. First, human information exchange is bidirectional with symmetric communication abilities, and spans exchanges of arbitrary length. In other words, linguistic interaction is not one-way, and can take as long or as short as it needs. Second, the information exchange emerges as a result of a disparity in knowledge or access to information, with the capability of bridging different modalities. For example, a human who has never seen a tiger but knows that it is a “big cat with stripes” would be able to identify one in a picture without effort. That is, humans can identify a previously unseen object from a textual description alone, while agents in previous interaction games have access to the same modality (a picture) and their shared communication protocol. Based on these considerations, we extend the basic referential game used in (Lazaridou et al., 2016; Andreas & Klein, 2016; Jorge et al., 2016) and (Havrylov & Titov, 2017) into a multi-modal, multi-step referential game. Firstly, our two agents, the sender and receiver, are grounded in different modalities: one has access only to the visual modality, while the other has access only to textual information (multi-modal). The sender sees an image and communicates it to the receiver whose job is to determine which object the sender refers to, while only having access to a set of textual descriptions. Secondly, communication is bidirectional and symmetrical, in that both the sender and receiver may send an arbitrary binary vector to each other. Furthermore, we allow the receiver to autonomously decide when to terminate a conversation, which leads to an adaptive-length conversation (multistep). The multi-modal nature of our proposal enforces symmetric, high-bandwidth communication, as it is not enough for the agents to simply exchange the carbon copies of their modalities (e.g. communicating the value of an arbitrary pixel in an image) in order to solve the problem. The multistep nature of our work allows us to train the agents to develop an efficient strategy of communication, implicitly encouraging a shorter conversation for simpler objects and a longer conversation for more complex objects. We evaluate and analyze the proposed multi-modal, multi-step referential game by creating a new dataset consisting of images of mammals and their textual descriptions. 
The task is somewhat related to recently proposed multi-modal dialogue games, such as that of (de Vries et al., 2016), but then played by agents using their own emergent communication. We build neural network-based sender and receiver, implementing techniques such as visual attention (Xu et al., 2015) and textual attention (Bahdanau et al., 2014). Each agent generates a multi-dimensional binary message at each time step, and the receiver decides whether to terminate the conversation. We train both agents jointly using policy gradient (Williams, 1992).

2 MULTI-MODAL, MULTI-STEP REFERENTIAL GAME

Game The proposed multi-modal, multi-step referential game is characterized by a tuple G = ⟨S, O, O_S, O_R, s^*⟩. S is a set of all possible messages used for communication by both the sender and receiver. An analogy of S in natural languages would be a set of all possible sentences. Unlike (Jorge et al., 2016), we let S be shared between the two agents, which makes the proposed game a more realistic proxy to natural language conversations where two parties share a single vocabulary. In this paper, we define the set of symbols to be a set of d-dimensional binary vectors, reminiscent of the widely-used bag-of-words representation of a natural language sentence. That is, S = {0, 1}^d. O is a set of objects. O_S and O_R are the sets of two separate views, or modes, of the objects in O, exposed to the sender and receiver, respectively. Due to the variability introduced by the choice of mode, the cardinalities of the latter two sets may differ, i.e., |O_S| ≠ |O_R|, and it is usual for the cardinalities of both O_S and O_R to be greater than or equal to that of O, i.e., |O_S| ≥ |O| and |O_R| ≥ |O|. In this paper, for instance, O is a set of selected mammals, and O_S and O_R are, respectively, images and textual descriptions of those mammals: |O_S| ≫ |O_R| = |O|. The ground-truth map between O_S and O_R is given as s^* : O_S × O_R → {0, 1}. This function s^* is used to determine whether elements o_s ∈ O_S and o_r ∈ O_R belong to the same object in O. It returns 1 when they do, and 0 otherwise. At the end of a conversation, the receiver selects an element from O_R as an answer, and s^* is used as a scorer of this particular conversation based on the sender's object o_s and the receiver's prediction ô_r.

Agents The proposed game is played between two agents, sender A_S and receiver A_R. A sender is a stochastic function that takes as input the sender's view of an object o_s ∈ O_S and the message m_r ∈ S received from the receiver and outputs a binary message m_s ∈ S. That is, A_S : O_S × S → S. We constrain the sender to be memory-less in order to ensure any message created by the sender is a response to an immediate message sent by the receiver. Unlike the sender, it is necessary for the receiver to possess a memory in order to reason through a series of message exchanges with the sender and make a final prediction. The receiver also has an option to determine whether to terminate the on-going conversation. We thus define the receiver as A_R : S × R^q → Ξ × O_R × S × R^q, where Ξ = {0, 1} indicates whether to terminate the conversation. It receives the sender's message m_s ∈ S and its memory h ∈ R^q from the previous step, and stochastically outputs: (1) whether to terminate the conversation s ∈ {0, 1}, (2) its prediction ô_r ∈ O_R (if decided to terminate) and (3) a message m_r ∈ S back to the sender (if decided not to terminate).

Play Given G, one game instance is initiated by uniformly selecting an object o from the object set O.
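To summarize how one such game instance unfolds (the remainder of the Play description appears with the agent definitions in Section 3 above), the following is a minimal sketch of the play loop; the attribute and method names on `game`, `sender`, and `receiver` are placeholders we introduce for illustration, not the released interface.

```python
import random

def play_episode(game, sender, receiver, t_max=10):
    """One instance of G = <S, O, O_S, O_R, s*>: the sender sees one image view
    of a uniformly chosen object, the receiver sees all descriptions, and
    messages are exchanged until the receiver decides to stop."""
    o = random.choice(game.objects)                  # uniformly select an object
    o_s = random.choice(game.sender_views[o])        # one image view for the sender
    m_r = receiver.initial_message()                 # learned initial message m_r^0
    h_r = receiver.initial_memory()                  # learned initial memory

    o_hat = None
    for _ in range(t_max):
        m_s = sender(o_s, m_r)                       # memory-less sender A_S
        stop, o_hat, m_r, h_r = receiver(m_s, h_r)   # receiver step A_R
        if stop:                                     # receiver terminates (s^t = 1)
            break
    return game.s_star(o_s, o_hat)                   # score with the ground truth s*
```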
A corresponding view os ∈ OS is sampled and given to the sender AS . The whole set OR is provided to the receiver AR. The receiver’s memory and initial message are learned as separate parameters. 3 AGENTS At each time step t ∈ {1, . . . , Tmax}, the sender computes its message mts = AS(os,mt−1r ). This message is then transmitted to the receiver. The receiver updates its memory htr, decides whether to terminate the conversation st, makes its prediction otr, and creates a response: (s t, otr,m t r, h t r) = AR(m t s, h t−1 r ). If s t = 1, the conversation terminates, and the receiver’s prediction otr is used to score this game instance, i.e., s∗(os, otr). Otherwise, this process repeats in the next time step: t← t+ 1. Fig. 1 depicts a single sender-receiver exchange at time step t. Feedforward Sender Let os ∈ OS be a real-valued vector, and mr ∈ S be a d-dimensional binary message. We build a sender AS as a feedforward neural network that outputs a d-dimensional factorized Bernoulli distribution. It first computes the hidden state hs by hs = fs(os,mr), (1) and computes p(ms,j = 1) for all j = 1, . . . , d as p(ms,j = 1) = σ(w > s,jhs + bs,j), where σ is a sigmoid function, and ws,j ∈ Rdim(hs) and bs,j ∈ R are the weight vector and bias, respectively. During training, we sample a sender’s message from this distribution, while during test time we take the most likely message, i.e., ms,j = arg maxb∈{0,1} p(ms,j = b). Attention-based Sender When the view os of an object is given as a set of vectors {os1 , . . . , osn} rather than a single vector, we implement and test an attention mechanism from (Bahdanau et al., 2014; Xu et al., 2015). For each vector in the set, we first compute the attention weight against the received message mr as αj = exp(fs,att(osj ,mr))∑n j′=1 exp(fs,att(osj′ ,mr)) , and take the weighted-sum of the input vectors: õs = ∑n j=1 αjosj . This weighted sum is used instead of os as an input to fs in Eq. (1). Intuitively, this process of attention corresponds to selecting a subset of the sender’s view of an object according to a receiver’s query. Recurrent Receiver Let or ∈ OR be a real-valued vector, and ms ∈ S be a d-dimensional binary message received from the sender. A receiver AR is a recurrent neural network that first updates its memory by htr = fr(m t s, h t−1 r ) ∈ Rq, where fr is a recurrent activation function. We use a gated recurrent unit (GRU, Cho et al., 2014). The initial message from the receiver to the sender, m0r , is learned as a separate parameter. Given the updated memory vector htr, the receiver first computes whether to terminate the conversation. This is done by outputting a stop probability, as in p(st = 1) = σ(w>r,sh t r + br,s), where wr,s ∈ Rq and br,s ∈ R are the weight vector and bias, respectively. The receiver terminates the conversation (st = 1) by either sampling from (during training) or taking the most likely value (during test time) of this distribution. If st = 0, the receiver computes the message distribution similarly to the sender as a d-dimensional factorized Bernoulli distribution: p(mtr,j = 1) = σ(w > r,j tanh ( W>r h t r + U > r ( ∑ or∈OR p(or = 1)gr(or) ) + cr ) + br,j), where gr : Rdim(or) → Rq is a trainable function that embeds or into a q-dimensional real-valued vector space. The second term inside the tanh function ensures that the message generated by the receiver takes into consideration the receiver’s current belief p(or = 1) (see Eq. (2)) on which object the sender is viewing. 
If st = 1 (terminate), the receiver instead produces its prediction by computing the distribution over all the elements in OR: p(or = 1) = exp(gr(or) >htr)∑ o′r∈OR exp(gr(o′r) >htr) . (2) Again, gr(or) is the embedding of an object o based on the receiver’s view or, similarly to what was proposed by (Larochelle et al., 2008). The receiver’s prediction is given by ôr = arg maxor∈OR p(or = 1), and the entire prediction distribution is used to compute the cross-entropy loss. Attention-based Receiver Similarly to the sender, we can incorporate the attention mechanism in the receiver. This is done at the level of the embedding function gr by modifying it to take as input both the set of vectors or = {or,1, . . . , or,n} and the current memory vector htr. Attention weights over the view vectors are computed against the memory vector, and their weighted sum õr, or its affine transformation to Rq , is returned. 4 TRAINING Both the sender and receiver are jointly trained in order to maximize the score s∗(os, ôr). Our per-instance loss function Li is the sum of the classification loss Lic and the reinforcement learning loss Lir. The classification loss is a usual cross-entropy loss defined as Lic = log p(o ∗ r = 1), where o∗r ∈ OR is the view of the correct object. The reinforcement learning loss is defined as Lir = T∑ t=1 (R−Bs(os,mt−1r )) d∑ j=1 log p(mts,j)︸ ︷︷ ︸ sender + (R−Br(mtr, ht−1r ))(log p(st) + d∑ j=1 log p(mtr,j))︸ ︷︷ ︸ receiver , where R is a reward given by the ground-truth mapping s∗. This reinforcement learning loss corresponds to REINFORCE (Williams, 1992). Bs and Br are baseline estimators for the sender and receiver, respectively, and both of them are trained to predict the final reward R, as suggested by (Mnih & Gregor, 2014): LiB = T∑ t=1 (R−Bs(os,mt−1r ))2 + (R−Br(mts, ht−1r ))2. In order to facilitate the exploration by the sender and receiver during training, we regularize the negative entropies of the sender’s and receiver’s message distributions. We also minimize the negative entropy of the receiver’s termination distribution to encourage the conversation to be of length 1− ( 12 ) Tmax on average. The final per-instance loss can then be written as Li = Lic + L i r − T∑ t=1 ( λsH(s t) + λm d∑ j=1 (H(mts,j) +H(m t r,j)) ) , where H is the entropy, and λs ≥ 0 and λm ≥ 0 are regularization coefficients. We minimize this loss by computing its gradient with respect to the parameters of both the sender and receiver and taking a step toward the opposite direction. We list all the mathematical symbols used in the description of the game in Appendix A. 5 EXPERIMENTAL SETTINGS 5.1 DATA COLLECTION AND PREPROCESSING We collect a new dataset consisting of images and textual descriptions of mammals. We crawl the nodes in the subtree of the “mammal” synset in WordNet (Miller, 1995). For each node, we collect the word o and the corresponding textual description or in order to construct the object set O and the receiver’s view set OR. For each word o, we query Flickr to retrieve as many as 650 images 1. These images form the sender’s view set OS . We sample 70 mammals from the subtree and build three sets from the collected data. First, we keep a subset of sixty mammals for training (550 images per mammal) and set aside data for validation (50 images per mammal) and test (20 images per mammal). This constitutes the in-domain test, that measures how well the model does on mammals that it is familiar with. 
We use the remaining ten mammals to build an out-of-domain test set (100 images per mammal), which allows us to test the generalization ability of the sender and receiver to unseen objects, and thereby to determine whether the receiver indeed relies on the availability of a different mode from the sender. In addition to the mammals, we build a third test set consisting of 10 different types of insects, rather than mammals. To construct this transfer test, we uniformly select 100 images per insect at random from the ImageNet dataset (Deng et al., 2009), while the descriptions are collected from WordNet, similarly to the mammals. The test is meant to measure an extreme case of zero-shot generalization, to an entirely different category of objects (i.e., insects rather than mammals, and images from ImageNet rather than from Flickr). Image Processing Instead of a raw image, we use features extracted by ResNet-34 (He et al., 2016). With the attention-based sender, we use 64 (8× 8) 512-dimensional feature vectors from the final convolutional layer. Otherwise, we use the 512-dimensional feature vector after average pooling those 64 vectors. We do not fine-tune the network. 1We query Flickr, obtaining more than 650 images per word, then we remove duplicates and use a heuristic to discard undesirables images. Duplicates are detected using dHash (Tantos, 2017). As a heuristic, we take an image classifier that was trained on ImageNet (Krizhevsky et al., 2012), classify each candidate image, and discard an image if its most likely class is not an animal. We randomly select from the remaining images to acquire the desired amount. Text Processing Each description is lowercased. Stopwords are filtered using the Stopwords Corpus included in NLTK (Bird et al., 2009). We treat each description as a bag of unique words by removing any duplicates. The average description length is 9.1 words with a standard deviation of 3.16. Because our dataset is relatively small, especially in the textual mode, we use pretrained 100-dimensional GloVe word embeddings (Pennington et al., 2014). With the attention-based receiver, we consider a set of such GloVe vectors as or, and otherwise, the average of those vectors is used as the representation of a description. 5.2 MODELS AND TRAINING Feedforward Sender When attention is not used, the sender is configured to have a single hidden layer with 256 tanh units. The input os is constructed by concatenating the image vector, the receiver’s message vector, their point-wise difference and point-wise product, after embedding the image and message vectors into the same space by a linear transformation. The attention-based sender uses a single-layer feedforward network with 256 tanh units to compute the attention weights. Recurrent Receiver The receiver is a single hidden-layer recurrent neural network with 64 gated recurrent units. When the receiver is configured to use attention over the words in each description, we use a feedforward network with a single hidden layer of 64 rectified linear units. Baseline Networks The baseline networks Bs and Br are both feedforward networks with a single hidden layer of 500 rectified linear units each. The receiver’s baseline network takes as input the recurrent hidden state ht−1r but does not backpropagate the error gradient through the receiver. 
Training and Evaluation We train both the sender and receiver as well as associated baseline networks using RMSProp (Tieleman & Hinton, 2012) with learning rate set to 10−4 and minibatches of size 64 each. The coefficients for the entropy regularization, λs and λm, are set to 0.08 and 0.01 respectively, based on the development set performance from the preliminary experiments. Each training run is early-stopped based on the development set accuracy for a maximum of 500 epochs. We evaluate each model on a test set by computing the accuracy@K, where K is set to be 10% of the number of categories in each of the three test sets (K is either 6 or 7, since we always include the classes from training). We use this metric to enable comparison between the different test sets and to avoid overpenalizing predicting similar classes, e.g. kangaroo and wallaby. We set the maximum length of a conversation to be 10, i.e., Tmax = 10. We train on a single GPU (Nvidia Titan X Pascal), and a single experiment takes roughly 8 hours for 500 epochs. Code We used PyTorch [http://pytorch.org]. Our implementation of the agents and instructions on how to build the dataset are available on Github [https://github.com/nyu-dl/MultimodalGame]. 6 RESULTS AND ANALYSIS The model and approach in this paper are differentiated from previous work mainly by: 1) the variable conversation length, 2) the multi-modal nature of the game and 3) the particular nature of the communication protocol, i.e., the messages. In this section, we experimentally examine our setup and specifically test the following hypotheses: • The more difficult or complex the referential game, the more dialogue turns would be needed if humans were to play it. Similarly, we expect the receiver to need more information, and ask more questions, if the problem is more difficult. Hence, we examine the relationship between conversation length and accuracy/difficulty. • As the agents take turns in a continuing conversation, more information becomes available, which implies that the receiver should become more sure about its prediction, even if the problem is difficult to begin with. Thus, we separately examine the confidence of predictions as the conversation progresses. • The agents play very different roles in the game. On the one hand, we would hypothesize the receiver’s messages to become more and more specific. For example, if the receiver has already established that the picture is of a feline, it does not make sense to ask, e.g., whether the animal has tusks or fins. This implies that the entropy of its messages should decrease. On the other hand, as questions become more specific, they are also likely to become more difficult for the sender to answer with high confidence. Answering that something is an aquatic mammal is easier than describing, e.g., the particular shape of a fin. Consequently, the entropy of the sender’s messages is likely to increase as it grows less confident in its answers. To examine this, we analyze the information theoretic content of the messages sent by both agents. In what follows, we discuss experiments along the lines of these hypotheses. In addition, we analyze the impact of changing the message dimensionality, and the effect of applying visual and linguistic attention mechanisms. Conversation length and accuracy/difficulty We train a pair of agents with an adaptive conversation length in which the receiver may terminate the conversation early based on the stop probability. 
Once training is done, we inspect the relationship between average conversation length and difficulty across classes, as well as the accuracy per the conversation length by partitioning the test examples into length-based bins. We expect that more difficult classes require a higher average length of exchange. To test this hypothesis, we use the accuracy of a separate classifier as a proxy for the difficulty of a sample. Specifically, we train a classifier based on a pre-trained ResNet-50, in which we freeze all but the last layer, and obtain the F1 score per class evaluated on the in-domain test set. The Pearson correlation between the F1 score and average conversation length across classes is −0.81 with a p-value of 4× 10−15 implying a statistically significant negative relationship, as displayed in Fig. 2 (a). In addition, we present the accuracies against the conversation lengths (as automatically determined by the receiver) in Fig. 2 (b). We notice a clear trend with the in-domain test set: examples for which the conversations are shorter are better classified, which might indicate that they are easier. It is important to remember that the receiver’s stop probability is not artificially tied to the performance nor confidence of the receiver’s prediction, but is simply learned by playing the proposed game. A similar trend can be observed with the out-of-domain test set, however, to a lesser degree. A similar trend of having longer conversation for more difficult objects is also found with humans in the game of 20 questions (Cohen & Lake, 2016).2 2 Accuracy scores in relation to the number of questions were obtained via personal communication. Conversation length and confidence With the agents trained with an adaptive conversation length, we can investigate how the prediction uncertainty of the receiver evolves over time. We plot the evolution of the entropy of the prediction distribution in Fig. 3 (a) averaged per conversation length bucket. We first notice that the conversation length, determined by the receiver on its own, correlates well with the prediction confidence (measured as negative entropy) of the receiver. Also, it is clear on the in-domain test set that the entropy almost monotonically decreases over the conversation, and the receiver terminates the conversation when the predictive entropy converges. This trend is however not apparent with the out-of-domain test set, which we attribute to the difficulty of zero-shot generalization. The goal of the conversation, i.e., the series of message exchanges, is to distinguish among many different objects. The initial message from the sender could for example give a rough idea of the high-level category that an object belongs to, after which the goal becomes to distinguish different objects within that high-level category. In other words, objects in a single such cluster, which are visually similar due to the sender’s access to the visual mode of an object, are predicted at different time steps in the conversation. We qualitatively examine this hypothesis by visualizing how the predictive probabilities of the receiver evolve over a conversation. In Fig. 3 (b,c), we show two example categories – kangaroo and wolf. As the conversation progress and more information is gathered for the receiver, similar but incorrect categories receive smaller probabilities than the correct one. We notice a similar trend with all other categories. 
Information theoretic message content In the previous section, we examined how prediction certainty evolved over time. We can do the same with the messages sent by the respective agents. In Fig. 4, we plot the entropies of the message distributions by the sender and receiver. We notice that, as the conversation progresses, the entropy decreases for the receiver, while it increases for the sender. This observation can be explained by the following conjecture. As the receiver accumulates information transmitted by the sender, the set of possible queries to send back to the sender shrinks, and consequently the entropy decreases. It could be said that the questions become more specific as more information becomes available to the receiver as it zones in on the correct answer. On the other hand, as the receiver’s message becomes more specific and difficult to answer, the certainty of the sender in providing the correct answer decreases, thereby increasing the entropy of the sender’s message distribution. We notice a similar trend on the out-of-domain test set as well. Effect of the message dimensionality Next, we vary the dimensionality d of each message to investigate the impact of the constraint on the communication channel, while keeping the conversation length adaptive. We generally expect a better accuracy with a higher bandwidth. More specifically, we expect the generalization to unseen categories (out-of-domain test) would improve as the information bandwidth of the communication channel increases. When the bandwidth is limited, the agents will be forced to create a communication protocol highly specialized for categories seen during training. On the other hand, the agents will learn to decompose structures underlying visual and textual modes of an object into more generalizable descriptions with a higher bandwidth channel. The accuracies reported in Fig. 5 agree well with this hypothesis. On the in-domain test set, we do not see significant improvement nor degradation as the message dimensionality changes. We observe, however, a strong correlation between the message dimensionality and the accuracy on the out-of-domain test set. With 32-dimensional messages, the agents were able to achieve up to 45% accuracy@7 on the out-of-domain test set which consists of 10 mammals not seen during training. The effect of modifying the message dimension was less clear when measured against the transfer set. Effect of Attention Mechanism All the experiments so far have been run without attention mechanism. We train additional three pairs of agents with 32-dimensional message vectors; (1) attentionbased sender, (2) attention-based receiver, and (3) attention-based sender and attention-based receiver. On the in-domain test set, we are not able to observe any improvement from the attention mechanism on either of the agents. We did however notice that the attention mechanism (attention-based receiver) significantly improves the accuracy on the transfer test set from 16.9% up to 27.4%. We conjecture that this is due to the fact that attention allows the agents to focus on the aspects of the objects (e.g. certain words in descriptions; or regions in images) that they are familiar with, which means that they are less susceptible to the noise introduced from being exposed to an entirely new category. We leave further analysis of the effect of the attention mechanism for future work. Is communication necessary? 
One important consideration is whether the trained agents utilize the adaptability of the communication protocol. It is indeed possible that the sender does not learn to shape communication and simply relies on the random communication protocol decided by the random initialization of its parameters. In this case, the receiver will need to recover information from the sender sent via this random communication channel. In order to verify this is not the case, we train a pair of agents without updating the parameters of the sender. As the receiver is still updated, and the sender’s information still flows toward the receiver, learning happens. We, however, observe that the overall performance significantly lags behind the case when agents are trained together, as shown in Fig. 6. This suggests that the agents must learn a new, task-specific communication protocol, which emerges in order to solve the problem successfully.3 7 CONCLUSION In this paper, we have proposed a novel, multi-modal, multi-step referential game for building and analyzing communication-based neural agents. The design of the game enables more human-like communication between two agents, by allowing a variable-length conversation with a symmetric communication. The conducted experiments and analyses reveal three interesting properties of the communication protocol, or artificial language, that emerges from learning to play the proposed game. First, the sender and receiver are able to adjust the length of the conversation based on the difficulty of predicting the correct object. The length of the conversation is found to (negatively) correlate with the confidence of the receiver in making predictions. Second, the receiver gradually asks more specific questions as the conversation progresses. This results in an increase of entropy in the sender’s message distribution, as there are more ways to answer those highly specific questions. We further observe that increasing the bandwidth of communication, measured in terms of the message dimensionality, allows for improved zero-shot generalization. Most importantly, we present a suite of hypotheses and associated experiments for investigating an emergent communication protocol, which we believe will be useful for the future research on emergent communication. Future Direction Despite the significant extension we have made to the basic referential game, the proposed multi-modal, multi-step game also exhibits a number of limitations. First, an emergent 3There are additional statistics about the stability of training in Appendix B. communication from this game is not entirely symmetric as there is no constraint that prevents the two agents from partitioning the message space. This could be addressed by having more than two agents interacting with each other while exchanging their roles, which we leave as future work. Second, the message set S consists of fixed-dimensional binary vectors. This choice effectively prevents other linguistic structures, such as syntax. Third, the proposed game, as well as any existing referential game, does not require any action, other than speaking. This is in contrast to the first line of research discussed earlier in Sec. 1, where communication happens among active agents. We anticipate a future research direction in which both of these approaches are combined. ACKNOWLEDGMENTS We thank Brenden Lake and Alex Cohen for valuable discussion. We also thank Maximilian Nickel, Y-Lan Boureau, Jason Weston, Dhruv Batra, and Devi Parikh for helpful suggestions. 
KC thanks AdeptMind, Tencent, eBay, NVIDIA, and CIFAR for their support. AD thanks the NVIDIA Corporation for their donation of a Titan X Pascal. This work was done by KE as part of the course DS-GA 1010-001 Independent Study in Data Science at the Center for Data Science, New York University. A part of Fig. 1 is licensed from EmmyMik/CC BY 2.0/https://www.flickr.com/photos/emmymik/8206632393/.
A TABLE OF NOTATIONS
B STABILITY OF TRAINING
We ran our standard setup4 six times using different random seeds. For each experiment, we trained the model until convergence using early stopping against the validation data, then measured the loss and accuracy on the in-domain test set. The accuracy@6 had a mean of 96.6% with a variance of 1.98e−1, the accuracy@1 had a mean of 86.0% with a variance of 7.59e−1, and the loss had a mean of 0.611 with a variance of 2.72e−3. These results suggest that the model is not only effective at classifying images, but also robust to random restarts.
4 The standard setup uses adaptive conversation lengths with a maximum length of 10 and a message dimension of 32. The values of other hyperparameters are described in Section 5.2.
1. What is the focus of the paper in terms of multi-modal communication? 2. What are the strengths of the proposed approach, particularly in its extension and experimentation? 3. What are the weaknesses of the paper regarding its reliance on a single dataset? 4. Are there any other datasets that can be used to support or refute the findings of the paper? 5. How does the reviewer assess the overall quality and impact of the paper's contributions?
Review
Review
The paper proposes a new multi-modal, multi-step reference game, where the sender has access to visual data and the receiver has access to textual messages, and the conversation can be terminated by the receiver when appropriate. The paper then describes its idea and extensions in detail and reports comprehensive experimental results for a number of hypotheses. The research questions seem straightforward, but it is good to see those experiments reveal some interesting points. One thing I am a bit concerned about is that the results are based on a single dataset. Do we have other datasets that can be used? The authors also lay out several further research directions. Overall, I think this paper is easy to read and good.
ICLR
Title
Computation-Efficient Quantization Method for Deep Neural Networks
Abstract
Deep Neural Networks, being memory and computation intensive, are a challenge to deploy in smaller devices. Numerous quantization techniques have been proposed to reduce the inference latency/memory consumption. However, these techniques impose a large overhead on the training procedure or need to change the training process. We present a non-intrusive quantization technique based on re-training the full precision model, followed by directly optimizing the corresponding binary model. The quantization training process takes no longer than the original training process. We also propose a new loss function to regularize the weights, resulting in reduced quantization error. Combining both helps us achieve full precision accuracy on the CIFAR dataset using binary quantization. We also achieve full precision accuracy on WikiText-2 using 2-bit quantization. Comparable results are also shown for ImageNet. We also present a 1.5-bit hybrid model exceeding the performance of the TWN LSTM model for WikiText-2.
1 INTRODUCTION
Different variants of Deep Neural Networks have achieved state-of-the-art results in various domains from computer vision to language processing (Krizhevsky et al., 2012; Ren et al., 2015; Vaswani et al., 2017; Cho et al., 2014). However, newer models are becoming more memory and computation intensive to achieve performance improvements. For example, the winner of ILSVRC 2015, ResNet (He et al., 2016), increased the number of layers by over 4x to gain less than 2% Top-1 accuracy improvement on ImageNet (Russakovsky et al., 2015). Compression techniques, such as knowledge distillation, pruning, low rank approximation and quantization, have been proposed to reduce the model size (Vapnik & Izmailov, 2015; Han et al., 2015; Sainath et al., 2013; Courbariaux et al., 2015). These compression techniques are evolving the field of model compression towards the goal of deploying DNN models on mobile phones and other embedded devices. Courbariaux et al. (2015) proposed the widely used technique for training quantized neural networks, where binary weights are used during forward and backward propagation, while full precision weights are preserved for accumulating gradients. Binary weights are approximated from full precision weights every iteration. Zhou et al. (2018; 2017a) have proposed an incremental quantization training procedure where the range for the weights is incrementally reduced. Choi et al.
(2017) use Hessian weighted k-means clustering for quantization. Lee & Kim (2018) used an iterative procedure of quantizing, de-quantizing, and completely retraining the full precision model, performed multiple times. All these techniques aimed to reduce the quantization error (the error between the full precision model and the corresponding quantized model). However, most of these quantization techniques either add an extra set of hyper-parameters or modify/lengthen the original training procedure. Designing a neural network model consists of two main steps: choosing a proper architecture/model given the characteristics of the task, and optimizing the hyper-parameters for convergence and accuracy. Hyper-parameter search varies from the number of layers in a model (Zoph et al., 2018) to the learning rate (lr) and batch-size (bs) combination (compare the ResNet (He et al., 2016) and Inception (Szegedy et al., 2016) networks with altogether different hyper-parameter sets). Courbariaux et al. (2015) requires updating the back-propagation procedure to train quantized networks. (Zhou et al., 2018; Lee & Kim, 2018) require very long training times because of multiple iterations of training and extra introduced hyper-parameters. Over time, the focus on reducing the model size and the corresponding inference latency has led to either lengthening of or major modifications to the training procedure. Our proposed quantization technique addresses these issues, resulting in easy adoption of our technique. Our contributions include but are not limited to
• A simple quantization training method based on re-training without requiring major modifications to the original training procedure. Training consists of two phases: phase1 trains the full precision model (with quantization) and phase2 trains the binary model constructed by phase1.
• Reducing the overhead of expensive quantization techniques, as quantization is performed only every few steps (specifically once every 500 iterations for the experiments).
• Maintaining the total number of iterations and time required to train the quantized network compared to the full precision network.
• Achieving full precision accuracy for the WikiText-2 and CIFAR datasets with 2-bit and 1-bit quantization, respectively. Presenting a hybrid 1.5-bit LSTM model for WikiText-2 outperforming the TWN LSTM model. Achieving performance comparable to existing works for ImageNet.
2 RELATED WORK
Quantization. Courbariaux et al. (2015) proposed the idea of training binary neural networks with quantized weights. Rastegari et al. (2016) introduced shared scaling factors to allow more range for binary values. (Hubara et al., 2016; Zhou et al., 2016; Lin et al., 2017; McDonnell, 2018; Hubara et al., 2018) built upon the training methodology along with the introduction of binary activation units. Lee & Kim (2018) performs full precision retraining multiple times to train a quantized network. Ternary quantization was proposed (Zhu et al., 2017; Li et al., 2016; Wang et al., 2018) to mitigate the gap between full precision and quantized weight networks. Let W ∈ R^{k×c×f×f} represent a weight of a convolution layer l with n elements in total, where k, c, f represent the number of output channels, the number of input channels, and the filter size, respectively. Rastegari et al. (2016) splits W into a binary weight B ∈ {−1,+1}^{k×c×f×f} and scaling factors α ∈ R_+^k shared per output, where
B = sign(W), α = ⟨B, W⟩/n, (1)
obtained by minimizing ‖W − αB‖². (A small code sketch of Eq. 1 is given below.) Binary quantization is extended to ternary quantization, where B ∈ {−1, 0, +1}^{k×c×f×f}.
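To make Eq. 1 concrete before moving on to the ternary case, the following is a minimal NumPy sketch (ours, not the authors' released code) of binary weight quantization with one scaling factor per output channel, following the reshaping of Rastegari et al. (2016); the function and variable names are ours.

import numpy as np

def binarize_per_output(W):
    # Binary quantization of a conv kernel W with shape (k, c, f, f),
    # with one scaling factor alpha per output channel (Eq. 1).
    k = W.shape[0]
    W2d = W.reshape(k, -1)                    # k rows of length c*f*f
    B = np.sign(W2d)
    B[B == 0] = 1.0                           # map sign(0) to +1
    alpha = np.abs(W2d).mean(axis=1)          # <B, W>/n reduces to mean |W| per row
    W_hat = (alpha[:, None] * B).reshape(W.shape)   # de-quantized weights
    return B.reshape(W.shape), alpha, W_hat

The same routine, applied bit by bit to the residual of the previous bits, yields the greedy multi-bit scheme of Eq. 2 described next.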
Ternary quantization introduces a threshold factor ∆_l to assign the ternary value to a weight. (Li et al., 2016; Zhu et al., 2017; Wang et al., 2018) have proposed various methodologies to evaluate the threshold. Lee & Kim (2018) performed ternary quantization by combining pruning and binary quantization. Binary quantization was extended to multi-bit quantization using a greedy methodology by Guo et al. (2017). For k-bit quantization, minimizing ‖Ŵ_i − α_i B_i‖ for the i-th bit results in
B_i = sign(Ŵ_i), α_i = ⟨B_i, Ŵ_i⟩/n, where Ŵ_i = W − Σ_{j=1}^{i−1} α_j B_j, (2)
referred to as the greedy approach. The greedy approach was improved by the refined method, where α_i is computed as ((B_i^T B_i)^{−1} B_i^T W)^T. Xu et al. (2018) improved the refined method by performing a binary search over the refined α set and alternately evaluating α and B. Low precision networks have also been proposed to reduce the gap with quantized activation units (Zhuang et al., 2018). Quantization has also been applied to RNNs and Long Short Term Memory (LSTM) models (Hou et al., 2017; Guo et al., 2017; Zhou et al., 2017b; Xu et al., 2018; Lee & Kim, 2018). We use greedy quantization in this work due to its simple operations (although alternating quantization yields better results at the cost of higher computation overhead). The next section describes our quantization training procedure in detail.
3 ITERATIVE QUANTIZATION
Choromanska et al. (2015) shows that minima of high quality (measured by test accuracy) for large-size networks occur in a well-defined band. Choromanska et al. (2015) also conjectured that training using methods like stochastic gradient descent or simulated annealing converges to a minimum in the band. Minima in the band can have varying flatness, where flat minima introduce a smaller error into the accuracy when distortion is added to the weights (Hochreiter & Schmidhuber, 1995). However, exploring through multiple minima has been a challenging task. Simulated annealing1 (Kirkpatrick et al., 1983) explores various minima, but does not aim to find wider minima. Motivated by simulated annealing, we propose a training technique that enables exploration of wider minima among multiple minima. The technique allows escaping from relatively sharper minima and aims to find wider minima in the band. Our training procedure consists of two phases. Phase1 trains the full precision network with quantization. Phase2 fine-tunes the binary network obtained from phase1.
3.1 PHASE1: STEP TRAINING
The goal of phase1 is to produce a full precision model with an optimized B and reduced quantization error (the error between the full precision and quantized model). Phase1 does not modify the original training procedure. Instead, in addition to the original training, phase1 just adds an extra distortion step using quantization (referred to as the quantized-distortion step), performed once every few iterations. Applying quantized-distortion to the weights consists of three parts: quantize the weights of each layer (quantization), convert the quantized weights back to full precision format (de-quantization), and update the full precision model with the de-quantized weights. Quantized-distortion is performed once every Quantization Step Size (QSS) iterations. The original training procedure combined with quantized-distortion is referred to as Step Training (Figure 1a). Step training is performed in phase1 until the convergence of training (in principle); a minimal sketch follows below.
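The phase1 loop can be summarized by the following framework-agnostic sketch (our paraphrase of the procedure, not the released code). Here train_step, model.layers, get_weights, and set_weights are hypothetical stand-ins for the unmodified original training step and a generic layer interface; binarize_per_output is the Eq. 1 helper sketched above.

QSS = 500  # quantization step size used in the experiments

def step_training(model, data_iter, num_iters, train_step):
    # Phase1: ordinary full-precision training, plus a quantized-distortion step
    # (quantize -> de-quantize -> overwrite weights) once every QSS iterations.
    for it in range(1, num_iters + 1):
        batch = next(data_iter)
        train_step(model, batch)                       # original, unmodified training step
        if it % QSS == 0:
            for layer in model.layers:                 # assumed iterable of weight layers
                W = layer.get_weights()
                _, _, W_hat = binarize_per_output(W)   # Eq. 1, applied layer by layer
                layer.set_weights(W_hat)               # replace the full-precision weights

Note that no separate quantized model is stored during phase1; only the full precision weights, periodically overwritten by their de-quantized version, are kept.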
Full precision training of the network for QSS iterations explores the curvature of the local convex surface of the current local minimum. Applying quantized-distortion post-training moves the model to the quantized lattice point on the training contour. Suppose that the quantized lattice point exists outside the curvature around a sharp minimum. Then, the network escapes such sharp minima in phase1. In contrast to existing quantization training methods, step training does not store quantized weights. Instead, step training updates and replaces the full precision weights with their quantized version every few iterations.
Quantization Step Size. QSS determines the amount of retraining to be done between two quantized-distortion steps. QSS needs to be big enough to compensate for the error added by the distortion and let the network explore the current local curvature. However, QSS should not be so large that the weights diverge far away from a nearby quantized lattice point. Comparing a big vs. a small QSS: a big QSS allows the weights to explore farther, allowing the binary representation of the weights to change (weights need a large amount of updates to change their sign). On the other hand, a small QSS allows the training to exploit the current local curvature and fine-tune α. We observe that for a training procedure with a fixed learning rate, starting with a big QSS and reducing QSS over the training period results in better convergence. However, the same behavior can be approximated with a fixed QSS and a varying learning rate. Figure 1b shows the movement of accuracy using step training with a step-wise decaying learning rate and a fixed QSS for ResNet32 on CIFAR-10. A larger learning rate enables a larger amount of updates (and hence more curvature exploration) given the same gradient from back-propagation (seen as fluctuations in the accuracy). On the other hand, a smaller learning rate helps exploitation (fine-tuning the parameters inside the current minimum), as shown by a smoother rise in accuracy. Hence, we use a fixed QSS (500 iterations) in the rest of the manuscript, although one can use a varying QSS for further fine-tuning.
1 Simulated annealing explores neighbors of the current solution in a randomized order. Worse neighboring solutions are also explored with relatively high probability (temperature) at the start (exploration). The temperature is reduced over time and, later, only nearby better solutions are explored (exploitation).
Convergence of B. We observe that B converges earlier than α during step training. Such an observation is demonstrated in our experiment with step training for CIFAR-10 with ResNet32. Figure 2 shows the movement of the weights with step training.2 Initially, the sign bits of the weights flip frequently (with a higher learning rate). However, with a smaller learning rate (after 80K iterations for CIFAR-10), B does not change and only α is optimized.
2 All the figures are presented over 160k iterations, with the learning rate decayed by 0.1 every 40k steps to investigate the effects of 4 different learning rates.
Convergence of α. Let W be a tensor of full precision weights with n elements. W is quantized into B (a binary tensor) and α (shared scaling factors). Step training updates W every iteration. On the other hand, (B, α) are calculated every QSS iterations. Let ∆W denote the total update accumulated for W since the last distortion step. Let ∆α denote the change between the new α and the α calculated at the previous distortion step. For binary quantization, the updated α is given by
α + ∆α = (1/n) Σ |W + ∆W|, (3)
where |·| is the element-wise absolute value.
Starting from a common absolute quantized value (α), the weights sharing the same α update independently (∀ i, j: w_i, w_j ∈ W, ∂w_i/∂w_j = 0). With a large QSS, as the weights diverge from α, the update for α becomes inefficient and noisy. Although phase1 results in an optimized B, phase1 does not completely optimize α (within the limited number of iterations). The need for improved convergence of α forms the motivation for phase2.
3.2 PHASE2: α TRAINING
Phase2 starts by converting the full precision trained model from phase1 to the corresponding binary model. The full precision weights W from phase1 are replaced with their corresponding binary version, α and B, in the model. B is fixed and only α is trained. Phase2 only constructs the binary model and does not construct the full precision model. Phase2 is faster than phase1 due to fewer training parameters, the use of binary weights, and the absence of the quantized-distortion step. Phase2 is performed with a smaller learning rate, after bit-flips no longer occur in phase1. Similar to phase1, phase2 also uses the original training procedure, but with a smaller number of trainable parameters. In the complete training procedure, we first perform phase1 followed by phase2. The trained binary model at the end of phase2 represents the output of the complete training procedure. This complete training procedure uses the same number of iterations as the original training procedure. As a result, the total training time combining phase1 and phase2 is equivalent to the original full precision training time.
3.3 SPECIAL CARE FOR CNNS
This section compares different ways to apply quantization to a 4D tensor kernel. Quantization for a 2D weight matrix W with n elements outputs a binary 2D matrix B ∈ {−1,+1}^n and a shared scaling factor α per row of W. All the weights in a row share the same α, where α ∈ α. Further, a row in the matrix can be split into t sub-rows (referred to as tables), where each table has a different α. For quantizing a 4D tensor kernel W ∈ R^{k×c×f×f} in a convolution layer, W is reshaped to a 2D matrix W ∈ R^{k×cff} (Rastegari et al., 2016) (k is the number of output features, c is the number of input features, and f is the filter size). There are k α in total, each shared by c × f × f weights. Each output feature in the convolution layer has c × f × f weights. Thus, there is only 1 α shared by all the weights of an output feature. As the quantized weights can only take the values {+α, −α}, all the inputs for an output feature can only be weighted by the same absolute factor α. Hence, the representative power of the quantized network is limited. To alleviate this problem, we convert the 4D tensor W to a 2D matrix differently. W is transposed to f × k × c × f and then reshaped to a 2D matrix in R^{f×kcf} (referred to as the skewed matrix). Next, each row of size k × c × f is split into k/f sub-rows. Each sub-row has a different α. The total number of α remains the same as above (k). Furthermore, the inputs for an output feature can now be weighted by f unique α. Note that some α will be shared among different output features as well. In our experiments, the skewed matrix shows better results. Section 5.1 shows the benefit in accuracy of using the skewed matrix for quantization.
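The default and skewed layouts of Section 3.3 can be contrasted with a short NumPy sketch (ours; the choice of which filter axis is moved to the front is an assumption, since the text does not specify it). It keeps k scaling factors in total but lets each output feature be weighted by f distinct absolute scales.

import numpy as np

def skewed_binarize(W):
    # W has shape (k, c, f, f); assumes k is divisible by f.
    k, c, f, _ = W.shape
    assert k % f == 0
    S = np.transpose(W, (2, 0, 1, 3)).reshape(f, k * c * f)    # skewed 2D matrix, f rows
    tables = S.reshape(f, k // f, c * f * f)                   # k/f sub-rows ('tables') per row
    B = np.sign(tables)
    B[B == 0] = 1.0
    alpha = np.abs(tables).mean(axis=2)                        # shape (f, k//f): still k alphas in total
    S_hat = (alpha[:, :, None] * B).reshape(f, k * c * f)
    W_hat = np.transpose(S_hat.reshape(f, k, c, f), (1, 2, 0, 3))  # undo the transpose
    return B, alpha, W_hat

In this layout each sub-row covers f consecutive output features, so every output feature draws on f different α while some α are shared across neighbouring output features, as stated above.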
4 K-MEANS LOSS AND SHUFFLING
This section aims to limit the divergence of the weights from each other during training to get a more accurate estimation of α. L2 regularization (in the form of an L2 loss) is frequently used in the training of neural networks. The L2 loss prevents the weights from exploding and suppresses the magnitude of the weights. However, the L2 loss does not apply any restriction on the variance of the weights. As α is obtained using Equation 1, a higher variance in the weights results in a higher quantization error. We aim to reduce the quantization error by introducing the LKM loss function to reduce the variance of the weights. Let W_i represent all the weights in a layer i. Let w ∈ W_i be a subset of weights sharing a common α (such w are referred to as clusters from now on). The new loss is represented as
LKM = (c / ‖W‖_0) Σ_{∀w∈W} ‖w − avg(|w|)‖², (4)
where c is a constant and avg(·) computes the average of its inputs. LKM divides the weights W into different clusters w with a common α and limits the divergence of the weights from the cluster average of their absolute values, α. LKM restricts the independent movement of weights. Similar to the L2 loss, a diverged weight increases LKM. In addition, a diverged weight also shifts the cluster average, increasing the LKM loss further. Thus LKM encourages a lower variance in the weights sharing a common α and improves quantization as a result. The constant factor c is set to be the same as the weight decay rate (the constant for the L2 loss is reduced to mitigate the effect of L2). A small code sketch of this loss is given at the end of this section.
Shuffling. LKM can be extended to multiple tables, where a row of a quantized matrix has multiple shared α. With multiple tables, let w correspond to a subset of a row, where the subset shares a common α. LKM helps in a better approximation of α by forcing a predetermined group of weights to exhibit low variance. We could also achieve a better approximation of α by re-arranging the weights so that similar values are grouped together. Note that rearranging is applicable only to DNNs with multiple tables. Let the weight matrix between two fully connected layers l_i and l_{i−1} be represented by W^{i,i−1}. Two nodes in a layer l_i, given as l_i^j and l_i^k, can be swapped by switching rows j, k of the weight matrix W^{i,i−1} and columns j, k of W^{i+1,i}. The layout of the weight matrix can thus be set to cluster the desired weights without impacting the output of the network. Swapping the nodes in layers l_i and l_{i−1} swaps the rows and columns of W^{i,i−1}, respectively. Thus, nodes in each layer can be swapped independently. K-means clustering is first used to find an optimized configuration of the weights in terms of grouping similar values together, and then the nodes in the layer are swapped to enforce this configuration, reducing the quantization error. The methodology of finding and applying the optimal swapping configuration is termed shuffling of nodes. Because applying shuffling of nodes to the current layer requires a layer before and after the current layer, we introduce a shuffle layer at the start and end of the network to allow shuffling in the first and last layers of the network. The shuffle layer stores the shuffle configuration and behaves as a mapping layer. The overhead of the shuffle layer is less than 1% of the model size (the same as the size of the bias in a layer). Shuffling is applied in the weight distortion step in phase1, when quantizing the weights.
5 EXPERIMENTS
Experiments are performed with CNNs and RNNs to show the effectiveness of the proposed method.
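As referenced above, the following is a minimal NumPy sketch of the LKM regularizer of Eq. 4 (ours, not the authors' code). We read Eq. 4 as penalizing the deviation of each weight's magnitude from its cluster's mean magnitude α, which matches the stated goal of reducing the variance of the weights sharing an α; this reading, and the representation of the clusters as a list of 1-D arrays, are our assumptions.

import numpy as np

def km_loss(weight_clusters, c=5e-4):
    # weight_clusters: list of 1-D arrays, one per group of weights sharing an alpha.
    # c plays the role of the weight-decay constant (0.0005 in the experiments).
    total = 0.0
    count = 0
    for w in weight_clusters:
        alpha = np.abs(w).mean()                     # cluster's scaling factor (Eq. 1)
        total += np.sum((np.abs(w) - alpha) ** 2)    # deviation of |w| from alpha
        count += w.size
    return c * total / count                         # (c / ||W||_0) * sum over clusters

In a full training run this term would simply be added to the task loss (with the L2 constant reduced, as described above).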
All the experiments are performed using TensorFlow (Abadi et al., 2016) on 2 Titan X GPUs. Full precision models for CNNs are obtained from the tensorflow models repository3, while RNN models are obtained from Verwimp et al. (2017)4.
3 https://github.com/tensorflow/models/
4 https://github.com/lverwimp/tf-lm
We quantize all the layers of the network unless specified otherwise. Iterative quantization training is performed on pre-trained full precision models. Greedy quantization using Equation 1 is used for all the experiments. The Quantization Step Size is set to a constant of 500 throughout all the experiments.
5.1 CIFAR
ResNet32 is trained on CIFAR-10 (Krizhevsky, 2009) for 90k iterations, where the first 60k iterations are performed using step training and the remaining 30k iterations are performed with α training. A 60% pruning rate is set for ternary quantization. Training ResNet32 using our proposed quantization training method does not incur any increase in training time. Table 1 shows the improvement from combining the techniques discussed in the previous sections in an incremental manner for ResNet32. We use k α for quantization following Rastegari et al. (2016) as the default quantization mode (k is the number of output features of a convolution layer). Different QSS schedules were tried, where QSS starts with a high value and is reduced over the training procedure. A QSS schedule produces better accuracy than a fixed QSS for step training. Note, however, that α training eliminates the need to fine-tune the QSS and achieves the same accuracy. The number of tables for the skewed mode has been set to give the same model size as the default mode (resulting in the same number of α). Table 2 provides our final accuracy for the CIFAR dataset with all the techniques combined, using the ResNet32 and WideResNet 28-10 models (70x bigger model size compared to ResNet32). Our model provides performance similar to TTQ (Zhu et al., 2017) using ResNet32. Our method achieves full precision accuracy for WideResNet 28-10 on both CIFAR-10 and CIFAR-100, compared to the results by McDonnell (2018), without changing the training procedure. We believe that WideResNet demonstrates a smaller quantization error than ResNet32 because Li et al. (2018) reported that wider networks facilitate flatter minima.
5.2 WIKITEXT-2
An LSTM model with 1 layer consisting of 512 nodes is used for the WikiText-2 (Merity et al., 2017) dataset, with the same hyper-parameter settings as followed by Xu et al. (2018). Performance is measured with the Perplexity Per Word metric (PPW). The full precision PPW is 100.2. Activation quantization requires quantization to be performed every iteration during inference, wiping out the speed-up obtained with quantized weights for inference. Activation quantization slows down training as well. Thus, we use 32-bit activations, whereas 3-bit activations are used by Xu et al. (2018). Our 2-bit alternating quantization (greedy quantization replaced with alternating quantization in our proposed method) reaches the full precision PPW (Table 3). Table 3 compares the accuracy of multi-table models (multiple α per row of the matrix to be quantized) with the TWN model (the TWN method from Li et al. (2016) combined with our training method). Our multi-table model (8 tables for the Embedding and Softmax layers, 16 tables for the LSTM layer) combined with the LKM loss function yields a PPW equivalent to the TWN PPW. The multi-table model amounts to 1.5 bits per weight in total (after accounting for all the α and B).
Applying LKM reduces the model size by 25%, from TWN (2 bits) to 1.5 bits, with equivalent PPW. We also perform 1-bit quantization, reaching 128.18 PPW.
Hybrid LSTM. In the WikiText-2 LSTM model, the Embedding and Softmax layers each form over 45% of the full precision model size. Therefore, we selectively optimize the number of quantization bits for each layer to achieve a higher compression rate. 1-bit quantization was found to be sufficient for the Embedding layer. However, other layers required more bits. We fix 1-bit quantization for the Embedding layer (1 table per row) and TWN for the Softmax layer, and vary the number of quantization bits for the LSTM layer. Using 2 bits for the LSTM layer (1.53 bits per weight in total) provides a better PPW than our greedy TWN model, with a 25% smaller model size.
5.3 ABLATION STUDY
Random Initialization. The distribution of bit-flips and the convergence of accuracy for step training with a randomly initialized model and with a pre-trained model are observed to be similar (Figure 2). The accuracy gap between the Binary Step Training (BST) model with a pre-trained model and BST without a pre-trained model is less than 0.1% (Table 1).
One-Step Quantization. We examine the degradation in accuracy with just one-step quantization (no retraining). ResNet32 with a full precision accuracy of 92.47% on CIFAR-10 produces 44.33% accuracy. To examine the potential of LKM, ResNet32 is again trained with random initialization in full precision mode with LKM (without any form of quantization). Although the full precision accuracy drops to 91.8%, the one-step quantization accuracy goes up to 76.32%. Increasing the regularization constant for LKM yields a one-step quantization accuracy of 84.51%.
Robustness. Table 4 shows the robustness of iterative quantization with the QSS varied over two orders of magnitude. As explained in Section 3.1, a varying learning rate can provide the same functionality as a varying QSS. As most modern neural networks use a special learning rate policy (such as exponential decay or step-wise decay), the training procedure is overall robust to the choice of QSS. The simplicity of the algorithm and the robustness to the added hyper-parameter facilitate quick adoption of our proposed technique.
5.4 IMAGENET
A full precision ResNet18 is trained on ImageNet (Russakovsky et al., 2015) following the base repository3 with a batch size of 256. Our binary model reaches 60.6% Top-1 accuracy compared to the full precision accuracy of 69.6% (Table 5). Our model shows accuracy comparable to existing quantization methods. We believe our model can reach higher accuracy by using layer-by-layer quantization as done by Zhou et al. (2018).
5.5 QUANTIZATION OVERHEAD
Let the training time per iteration be defined as the combined time to perform forward-propagation and back-propagation on a batch of data. We evaluate the overhead of performing a quantization step once, relative to this training time per iteration. All the timings are averaged over 1000 iterations and over ResNet32 and WideResNet. We observed that the overhead of using greedy quantization is the lowest (8% and 12% of the training time for 1-bit and 2-bit quantization). More sophisticated quantization methods using regression or iterative procedures, namely refined and alternating quantization, have overheads of 5x and 40x, respectively, over the training time. Table 3 compares the benefit of using these quantization methods, where alternating quantization shows the best performance despite the biggest overhead.
As our training method, unlike existing methods, performs quantization only once every 500 iterations, the overhead of the quantization is reduced by 500x. As a result, the overhead of even the most expensive quantization remains at 10% of the training time.
6 CONCLUSION
In this work, we have presented an iterative quantization technique performing quantization once every few steps, combined with α training of the binary model. Step training explores flatter minima while escaping sharp minima, and α training performs exploitation of the chosen minimum. We also presented a loss function LKM which allows the weights to be adjusted for improved quantization. We demonstrated full precision accuracy recovery on the CIFAR and WikiText-2 datasets with our quantized models. We also presented a hybrid model with 1.5 bits performing better than our TWN model.
B TRAINING DETAILS
We provide more details on the training procedure of the networks for all the datasets.
B.1 CIFAR
The CIFAR dataset consists of 50000 training images and 10000 test images. Each image is of size 32x32. The CIFAR-10 dataset classifies the corpus of images into 10 disjoint classes. CIFAR-100 classifies the image set into 100 fine-grained disjoint classes.
ResNet. ResNet32 and WideResNet 28-10 were both trained for 90k iterations with a batch size of 128. A step-wise decay learning schedule was used. With an initial learning rate of 0.1, the learning rate was decayed by 0.1 at 40k, 60k, and 80k iterations. The momentum optimizer was used for training with the momentum set to 0.9. The weight decay rate was set to 0.0005. Training data was pre-processed with random cropping and random horizontal flipping. Evaluation data was pre-processed with a single central crop only. The Quantization Step Size was set to 500 during step training.
Pruning. ResNet32 was pruned with 60% as the final sparsity, starting from an initial sparsity of 20%. Pruning was started from a pre-trained model. The pruning rate was gradually increased at an exponential rate (exponential factor of 3), with pruning performed every 100 iterations. Re-training for pruning was performed for 40k iterations.
B.2 WIKITEXT-2
WikiText-2 contains 2088k training, 217k validation, and 245k test tokens, with a vocabulary of 33k words. The model for WikiText-2 consisted of 1 LSTM layer with 512 units. The initial learning rate was set to 20. The learning rate was decayed by 1.2 every 2 epochs. Training was terminated once the learning rate was less than 0.001 or a maximum of 80 epochs was reached. The absolute gradient norm was set at 0.25. The network was unrolled for 30 time steps. Training was performed with a dropout ratio of 0.5. Weights were clipped to an absolute maximum of 1.0. The Quantization Step Size was set to 500 during step training.
Divergence with Greedy Quantization. Using greedy 2-bit quantization for the LSTM model always diverged the training on the WikiText-2 dataset. To make the model converge to some extent, we used the 1-bit quantized model as an initialization for 2-bit quantization. The 1-bit-initialized model converges for a few epochs but also diverges after 10-15 epochs. The results reported in Table 3 for 2-bit greedy quantization follow this initialization with the 1-bit quantized model. This divergence with greedy quantization is the reason for using TWN for the Softmax layer (and not 2-bit quantization) in our hybrid model.
B.3 IMAGENET
ImageNet consists of 1281176 training images and 50000 validation images, classified into 1000 classes. The ResNet18 network was used for training.
Training data was pre-processed with random cropping and random horizontal flipping. However, validation data was pre-processed with a single central crop. A step-wise decay learning rate schedule was followed, with an initial learning rate of 0.1 decayed at epochs 30, 60, 80, and 90 by a factor of 0.1. The complete training procedure was performed for 100 epochs with a batch size of 256. The momentum optimizer was used for training with the momentum set to 0.9. The weight decay rate was set to 0.0005. The Quantization Step Size was set to 500 during step training.
B.4 BATCH NORMALIZATION
Batch normalization (Ioffe & Szegedy, 2015) parameters (µ and σ) are updated using a moving average. Consequently, a quantized-distortion step performed even 100 iterations earlier would have less than a 1% effect on the BN parameters (with momentum = 0.99). As a result, the BN parameters are not well suited for the quantized model, which results in a drop in evaluation accuracy for the quantized model. To avoid this drop in evaluation accuracy caused by BN, the BN parameters are re-evaluated over 1 training epoch (keeping the other parameters fixed) before performing evaluation for phase1. Phase2 does not require any special care for batch normalization as there is no distortion step.
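The re-evaluation of the batch normalization statistics before phase1 evaluation can be sketched as follows (our illustration, not the authors' code; get_pre_bn_activations is a hypothetical hook returning a layer's channels-last input, and the recomputed statistics would overwrite that layer's moving mean and moving variance).

import numpy as np

def reestimate_bn_stats(get_pre_bn_activations, train_batches):
    # One pass over the training data with all weights frozen; per-channel
    # mean and variance of the pre-BN activations are accumulated exactly,
    # instead of via the momentum-0.99 moving average.
    sums, sq_sums, n = None, None, 0
    for batch in train_batches:
        a = get_pre_bn_activations(batch)
        a = a.reshape(-1, a.shape[-1])        # flatten all but the channel axis
        if sums is None:
            sums = np.zeros(a.shape[-1])
            sq_sums = np.zeros(a.shape[-1])
        sums += a.sum(axis=0)
        sq_sums += (a ** 2).sum(axis=0)
        n += a.shape[0]
    mean = sums / n
    var = sq_sums / n - mean ** 2
    return mean, var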
1. What is the focus of the paper regarding neural network quantization? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. How effective are the modifications proposed in the paper, such as the L2 regularization and shuffling approach? 4. Do the experimental results support the effectiveness of the proposed method? 5. How does the reviewer assess the novelty and significance of the paper's contributions?
Review
Review
This work addresses the issue of quantization for neural networks, and in particular focuses on ternary weight networks. The proposed approach has two phases: the first phase performs quantization and de-quantization at certain iterations during training, where the schedule of these operations is a hyperparameter specified a priori. The second phase focuses on training the scaling factor. The first phase is similar to the iterative quantization method proposed in “Retraining-Based Iterative Weight Quantization for Deep Neural Networks”, and differs in that this work performs the quantization and de-quantization operations more frequently. This work also proposed a modified version of L2 regularization, but it’s not clear how much benefit it provides compared to a regular L2 regularization. There is also a shuffling approach, but it seems to provide limited improvement. The experimental results in general do not provide convincing evidence that the proposed method outperforms existing approaches. For example, the ResNet-32 on CIFAR-10 result does not perform better than the one reported in “Trained ternary quantization”, and the ImageNet result is also worse than some existing works. The work lacks novelty and the results do not show significant improvement over existing approaches.
ICLR
Title Computation-Efficient Quantization Method for Deep Neural Networks Abstract Deep Neural Networks, being memory and computation intensive, are a challenge to deploy in smaller devices. Numerous quantization techniques have been proposed to reduce the inference latency/memory consumption. However, these techniques impose a large overhead on the training procedure or need to change the training process. We present a non-intrusive quantization technique based on re-training the full precision model, followed by directly optimizing the corresponding binary model. The quantization training process takes no longer than the original training process. We also propose a new loss function to regularize the weights, resulting in reduced quantization error. Combining both help us achieve full precision accuracy on CIFAR dataset using binary quantization. We also achieve full precision accuracy on WikiText-2 using 2 bit quantization. Comparable results are also shown for ImageNet. We also present a 1.5 bits hybrid model exceeding the performance of TWN LSTM model for WikiText-2. N/A Deep Neural Networks, being memory and computation intensive, are a challenge to deploy in smaller devices. Numerous quantization techniques have been proposed to reduce the inference latency/memory consumption. However, these techniques impose a large overhead on the training procedure or need to change the training process. We present a non-intrusive quantization technique based on re-training the full precision model, followed by directly optimizing the corresponding binary model. The quantization training process takes no longer than the original training process. We also propose a new loss function to regularize the weights, resulting in reduced quantization error. Combining both help us achieve full precision accuracy on CIFAR dataset using binary quantization. We also achieve full precision accuracy on WikiText-2 using 2 bit quantization. Comparable results are also shown for ImageNet. We also present a 1.5 bits hybrid model exceeding the performance of TWN LSTM model for WikiText-2. 1 INTRODUCTION Different variants of Deep Neural Networks have achieved state-of-the-art results in various domains from computer vision to language processing (Krizhevsky et al., 2012; Ren et al., 2015; Vaswani et al., 2017; Cho et al., 2014). However, newer models are becoming more memory and computation intensive to achieve performance improvements. For example, winner of ILSVRC 2015 ResNet (He et al., 2016) increased the number of layers by over 4x to gain less than 2% Top-1 accuracy improvement on ImageNet (Russakovsky et al., 2015). Compression techniques, such as knowledge distillation, pruning, low rank approximation and quantization, have been proposed to reduce the model size (Vapnik & Izmailov, 2015; Han et al., 2015; Sainath et al., 2013; Courbariaux et al., 2015). These compression techniques are evolving the field of model compression towards the goal of deploying DNN models on mobile-phone and other embedded devices. Courbariaux et al. (2015) proposed the widely used technique for training quantized neural networks, where binary weights are used during forward and backward propagation, while full precision weights are preserved for accumulating gradients. Binary weights are approximated from full precision weights every iteration. (Zhou et al., 2018; 2017a) have proposed incremental quantization training procedure where the range for the weights is incrementally reduced. Choi et al. 
(2017) use Hessian weighted k-means clustering for quantization. Lee & Kim (2018) used iterative procedure of quantizing, de-quantizing and complete retraining of the full precision model, performed multiple times. All the techniques aimed to reduce the quantization error (error between full precision model and corresponding quantized model). However, most of these quantization techniques either adds extra set of hyper-parameters or modifies/lengthens the original training procedure. Designing a neural network model consists of two main steps - choose a proper architecture/model given the characteristics of the task, and optimize the hyper-parameters for convergence and accuracy. Hyper-parameter search varies from number of layers in a model (Zoph et al., 2018) to learning rate (lr), batch-size (bs) combo (compare ResNet (He et al., 2016) and Inception (Szegedy et al., 2016) networks with altogether different hyper-parameter set). Courbariaux et al. (2015) requires updating the back-propagation procedure to train quantized networks. (Zhou et al., 2018; Lee & Kim, 2018) require very long training time because of multiple iterations of training and extra introduced hyper-parameters. Over time, focus on reducing the model size and the corresponding inference latency has led to either lengthening or major modifications to training procedure. Our Proposed quantization technique addresses these issues resulting in easy adoption of our technique. Our contributions include but are not limited to • A simple quantization training method based on re-training without requiring major modifications to the original training procedure. Training consists of two phases: phase1 trains the full precision model (with quantization) and phase2 trains the binary model constructed by phase1. • Reduce the overhead of expensive quantization techniques as quantization is performed only every few steps (specifically once every 500 iterations for the experiments). • Maintained the total number of iterations and time required to train the quantized network compared to the full precision network. • Achieve full precision accuracy for WikiText-2 and CIFAR dataset with 2-bit and 1-bit quantization respectively. Present a hybrid 1.5 bits LSTM models for WikiText-2 outperforming TWN LSTM model. Achieve performance comparable to existing works for ImageNet. 2 RELATED WORK Quantization. Courbariaux et al. (2015) proposed the idea of training binary neural networks with quantized weights. Rastegari et al. (2016) introduced shared scaling factors to allow more range for binary values. (Hubara et al., 2016; Zhou et al., 2016; Lin et al., 2017; McDonnell, 2018; Hubara et al., 2018) built upon the training methodology along with introduction of binary activation units. Lee & Kim (2018) performs full precision retraining multiple times to train a quantized network. Ternary quantization was proposed (Zhu et al., 2017; Li et al., 2016; Wang et al., 2018) to mitigate the gap between full precision and quantized weight networks. Let W ∈ Rk×c×f×f represent a weight of a convolution layer l with total n elements where k, c, f represents output channels, input channels and size of the filter respectively. Rastegari et al. (2016) splits W into binary weight B ∈ {−1,+1}k×c×f×f and scaling factor α ∈ R+k shared per output, where B = sign(W) α = 〈B,W〉/n (1) obtained by minimizing ‖W − αB‖2. Binary quantization is extended to ternary where B ∈ {−1, 0,+1}n×c×k×k. 
Ternary quantization introduces a threshold factor 4l to assign the ternary value to a weight. (Li et al., 2016; Zhu et al., 2017; Wang et al., 2018) have proposed various methodologies to evaluate the threshold. Lee & Kim (2018) performed ternary quantization by combining pruning and binary quantization. Binary quantization was extended to multi-bit quantization using a greedy methodology by Guo et al. (2017). For k-bit quantization, minimizing ‖Ŵi −αiBi‖ for ith bit quantization resulted in, Bi = sign(Ŵi) αi = 〈Bi,Ŵi〉/n where Ŵi = W− i−1∑ j=1 αjBj (2) referred as the greedy approach. Greedy approach was improved by refined method, where αi is computed by using ((BTi Bi)−1B T i W)T . Xu et al. (2018) improved refined method by performing a binary search on the given refined α set and alternately evaluatingα and B. Low precision networks have also been proposed to reduce the gap with quantized activation units (Zhuang et al. (2018)). Quantization has also been applied to RNNs and Long Short Term Memory (LSTM) models as well (Hou et al. (2017); Guo et al. (2017); Zhou et al. (2017b); Xu et al. (2018); Lee & Kim (2018)). We use greedy quantization in this work due to its simple operations (although alternating yields the better results at the cost of higher computation overhead). Next section describes our quantization training procedure in detail. 3 ITERATIVE QUANTIZATION Choromanska et al. (2015) shows that minima of high quality (measured by test accuracy) for largesize networks occur in a well-defined band. Choromanska et al. (2015) also conjectured that training using methods like stochastic gradient descent, simulated annealing converges to a minimum in the band. Minima in the band can have varying flatness, where flat minima have smaller error introduced to the accuracy upon adding distortion to the weight (Hochreiter & Schmidhuber (1995)). However, exploring through multiple minima has been a challenging task. Simulated annealing1 (Kirkpatrick et al., 1983) explores through various minima, but does not aim to find wider minima. Motivated from simulated annealing, we propose a training technique to enable exploration of wider minimum among multiple minima. The technique allows escaping from relatively sharper minima and aims to find wider minima in the band. Our training procedure consists of two phases. Phase1 trains the full precision network with quantization. Phase2 fine-tunes the binary network obtained from phase1. 3.1 PHASE1: STEP TRAINING The goal of phase1 is to produce a full precision model with optimized B and reduced quantization error (error between full precision and quantized model). Phase1 does not modify the original training procedure. Instead, in addition to the original training, phase1 just adds an extra distortion step using quantization (referred as quantized-distortion step), performed once every few iterations. Applying quantized-distortion to the weights consists of 3 parts - quantize the weights of each layer (quantization), convert the quantized weights back to full precision format (de-quantization), and update the full precision model with the de-quantized weights. Quantized-distortion is performed once every Quantized Step Size (QSS) iterations. Original training procedure combined with quantization-distortion is referred as Step Training (Figure 1a). Step training is performed in phase1 until the convergence of training (in principle). 
Full precision training of the network for QSS iterations explores the curvature of the local convex surface of the current local minimum. Applying quantized-distortion post-training moves the model to the quantized lattice point on the training contour. Suppose that the quantized lattice point exist outside the curvature around a sharp minimum. Then, the network escapes such sharper minima in phase1. In contrast to existing quantization training methods, step training does not store quantized weights. Instead, step training updates and replaces full precision weights with their quantized version every few iterations. Quantization Step Size. QSS determines the amount of retraining to be done between two quantized-distortion steps. QSS needs to be big enough to compensate for the error added by the distortion and let the network explore the current local curvature. However, QSS should not be too large to diverge the weights far away from a nearby quantized lattice point. Comparing big vs 1Simulated Annealing explores neighbors of the current solution in a randomized order. Worse neighboring solutions are also explored with relatively high probability (temperature) in the start (exploration). Temperature is reduced over time and later, only nearby better solutions are explored (exploitation). small QSS - big QSS allows the weights to explore farther allowing the binary representation of the weights to change (weights need large amount of updates to change their sign). On the other hand, small QSS allows the training to exploit the current local curvature and fine-tune α. We observe that for a training procedure with fixed learning rate, starting with a big QSS and reducing QSS over the training period results in better convergence. However, the same behavior can be approximated with a fixed QSS and a varying learning rate. Figure 1b shows the movement of accuracy using step training with step-wise reducing learning rate and fixed QSS for ResNet32 on CIFAR-10. Larger learning rate enables larger amount of updates (and hence more curvature exploration) given the same gradient from the back propagation (fluctuations in the accuracy). On the other hand, smaller learning rate helps exploit (fine-tune the parameters inside the current minimum) as shown by a smoother rise in accuracy. Hence, we use fixed QSS (500 iterations) in the rest of the manuscript, although one can use varying QSS for further fine-tuning. Convergence of B. We observe that B converges earlier compared to α during step training. Such an observation is demonstrated in our experiment with step training for CIFAR-10 with ResNet32. Figure 2 shows the movement of weight with step training2. Initially, the sign bits of the weight flip frequently (with higher learning rate). However, with smaller learning rate (after 80K iterations for CIFAR-10), B do not change and only α is optimized. Convergence of α. Let W be a tensor of full precision weights with n elements. W is quantized into B (binary tensor) and α (shared scaling factors). Step training updates W every iteration. On the other hand, (B,α) are calculated every QSS iterations. Let 4W denote the total update accumulated for W since the last distortion step. Let4α denote the change between the new α and the α calculated at previous distortion step. For binary quantization, the updated α is given by- α+4α = 1 n ∑ |W +4W| (3) where |.| is the absolute function. 
Starting from a common absolute quantized value (α), the weights sharing the same α update independently (∀i, j wi, wj ∈ W, ∂wi/∂wj = 0). With large QSS, as the weights diverge from α, the update for α becomes inefficient and noisy. Although phase1 results in optimized B, phase1 does not completely optimize α (within the limited number of iterations). Need for improved convergence of α forms the motivation for phase2. 3.2 PHASE2: α TRAINING Phase2 starts by converting the full precision trained model from phase1 to the corresponding binary model. The full precision weights W in phase1 are replaced with the corresponding binary version, α and B in the model. B is fixed and only α is trained. Phase2 only constructs the binary model 2All the figures are presented with 160k iterations, with learning rate decayed by 0.1 every 40k steps to investigate the effects of 4 different learning rates. and does not construct the full precision model. Phase2 is faster compared to phase1 due to fewer training parameters, use of binary weights, and no quantized-distortion step. Phase2 is performed with a smaller learning rate after the bit-flips do not occur anymore in phase1. Similar to phase1, phase2 also uses the original training procedure but with fewer number of trainable parameters. In the complete training procedure, we first perform phase1 followed by phase2. The trained binary model at the end of phase2 represents the output of the complete training procedure. This complete training procedure uses the same number of iterations as that in the original training procedure. As a result, the total training time combining phase1 and phase2 is equivalent to the original full precision training time. 3.3 SPECIAL CARE FOR CNNS This section compares different ways to apply quantization on a 4D tensor kernel. Quantization for a 2D weight matrix W with n elements outputs a binary 2D matrix B ∈ {−1,+1}n and shared scaling factor α per row of W . All the weights in a row share the same α, where α ∈ α. Further, a row in the matrix can be split into t sub-rows (referred as tables), where each table has a different α. For quantizing a 4D tensor kernel W ∈ Rk×c×f×f in a convolution layer, W is reshaped to 2D matrix W ∈ Rk×cff (Rastegari et al., 2016) (k is the number of output features, c is the number of inputs features and f is the filter size). There are total k α, each shared by c× f × f number of weights. Each output feature in the convolution layer has c × f × f weights. Thus, there is only 1 α shared by all the weights for an output feature. As the quantized weights can only take the value of {+α,−α}, all the inputs for an output feature can only be weighted by the same absolute factor α. Hence, the representative power of the quantized network is limited. To alleviate this problem, we convert the 4D tensor W to 2D matrix differently. W is transposed to f × k × c× f and then reshaped to 2D matrix Rf×kcf (referred as skewed matrix). Next, each row of size k × c × f is split into k/f sub-rows. Each sub-row has a different α. Total number of α remains the same as above (k). Furthermore, the inputs for an output feature can now be weighted by f number of unique α. Note that some α will be shared among different output features as well. In our experiments, skewed matrix shows better results. Section 5.1 shows the benefit in accuracy using the skewed matrix for quantization. 
4 K-MEANS LOSS AND SHUFFLING This section aims to limit the divergence of weights from each other during training to get more accurate estimation of α. L2 regularization (in the form of L2 loss) is frequently used in training of neural networks. L2 loss prevents the weights from exploding and suppresses the magnitude of the weights. However, L2 loss does apply any restriction on the variance of the weights. As α is obtained using Equation 1, higher variance in weights results in higher quantization error. We aim to reduce quantization error by introducing LKM loss function to reduce the variance of the weights. Let Wi represent all the weights in a layer i. Letw ∈Wi be a subset of weights sharing common α (w are referred as clusters from now). The new loss is represented as: LKM = c ‖W‖0 ∑ ∀w∈W ‖w − avg(|w|)‖2 (4) where c is a constant, avg(.) computes the average of the inputs. LKM divides the weights W into different clusters w with common α and limits the divergence of weights from the cluster average of its absolute values α. LKM restricts the independent movement of weights. Similar with L2 loss, a diverged weight increases the LKM . In addition, a diverged weight also shifts the cluster average, increasing the LKM loss further. Thus LKM encourages a lower variance in the weights with common α and improve quantization as a result. The constant factor c is set to be the same as weight decay rate (the constant for L2 loss is reduced to mitigate the effect of L2). Shuffling. LKM can be extended for multiple tables, where a row of a quantized matrix has multiple shared α. With multiple tables, let w correspond to a subset of row, where the subset shares the common α. LKM helps in better approximation for α by forcing a predetermined group of weights to exhibit low variance. We could also achieve a better approximation for α by re-arranging the weights so that similar values are grouped together. Note that rearranging is applicable for DNNs with multiple tables only. Let the weight matrix between two fully connected layers li and li−1 be represented byW i,i−1. Two nodes in a layer li given as lij and l i k can be swapped by switching the rows j, k of weight matrix W i,i−1 and the columns j, k of W i+1,i. The layout of the weight matrix can be set to cluster the desired weights without impacting the output of the network. Swapping the nodes in layer li, li−1 swaps the rows and columns of W i,i−1 respectively. Thus, nodes in each layer can be swapped independently. K-means clustering is used to find an optimized configuration of weights in terms of grouping similar values together first. And then the nodes in the layer are swapped to enforce such configuration, reducing the quantization error. The methodology of finding and applying the optimal swapping configuration is termed as shuffling of nodes. Because applying shuffling of nodes to the current layer requires a layer before and after the current layer, we introduce a shuffle layer in the start and end of the network to allow shuffling in first and last layer of the network. The shuffle layer stores the shuffle configuration and behaves as a mapping layer. The overhead of the shuffle layer is less than 1% of the model size (same as the size of bias in a layer). Shuffling is applied in the weight distortion step in phase1, when quantizing the weights. 5 EXPERIMENTS Experiments are performed with CNNs and RNNs to show the effectiveness of the proposed method. 
5 EXPERIMENTS

Experiments are performed with CNNs and RNNs to show the effectiveness of the proposed method. All the experiments are performed using TensorFlow (Abadi et al., 2016) on 2 Titan X GPUs. Full precision models for CNNs are obtained from the tensorflow models repository (https://github.com/tensorflow/models/), while RNN models are obtained from Verwimp et al. (2017) (https://github.com/lverwimp/tf-lm). We quantize all the layers of the network unless specified otherwise. Iterative quantization training is performed on pre-trained full precision models. Greedy quantization using Equation 1 is used for all the experiments. The Quantization Step Size is kept constant at 500 throughout all the experiments.

5.1 CIFAR

ResNet32 is trained on CIFAR-10 (Krizhevsky, 2009) for 90k iterations, where the first 60k iterations are performed using step training and the remaining 30k iterations are performed with α training. A 60% pruning rate is set for ternary quantization. Training ResNet32 using our proposed quantization training method does not incur any increase in training time. Table 1 shows the improvement obtained by combining the techniques discussed in the previous sections in an incremental manner for ResNet32. We use k α for quantization following Rastegari et al. (2016) as the default quantization mode (k is the number of output features of a convolution layer). Different QSS schedules were tried, where QSS starts with a high value and is reduced over the training procedure. A QSS schedule produces better accuracy compared to a fixed QSS for step training. Note, however, that α training eliminates the need to fine-tune the QSS and achieves the same accuracy. The number of tables for the skewed mode has been set to yield the same model size as the default mode (resulting in the same number of α). Table 2 provides our final accuracy on the CIFAR datasets with all the techniques combined, using the ResNet32 and WideResNet 28-10 models (the latter has a 70x bigger model size compared to ResNet32). Our model provides performance similar to TTQ (Zhu et al., 2017) using ResNet32. Our method achieves full precision accuracy for WideResNet 28-10 on both CIFAR-10 and CIFAR-100, comparable to the results by McDonnell (2018), without changing the training procedure. We believe that WideResNet exhibits a smaller quantization error than ResNet32 because Li et al. (2018) reported that wider networks facilitate flatter minima.

5.2 WIKITEXT-2

An LSTM model with 1 layer consisting of 512 nodes is used for the WikiText-2 (Merity et al., 2017) dataset, with the same hyper-parameter settings as followed by Xu et al. (2018). Performance is measured with the Perplexity Per Word (PPW) metric. The full precision PPW is 100.2. Activation quantization requires quantization to be performed at every iteration during inference, wiping out the speed-up obtained with quantized weights for inference. Activation quantization slows down training as well. Thus, we use 32-bit activations, whereas Xu et al. (2018) used 3-bit activations. Our 2-bit alternating quantization (greedy quantization replaced with alternating quantization in our proposed method) reaches the full precision PPW (Table 3). Table 3 compares the accuracy of multi-table models (multiple α per row of the matrix to be quantized) with a TWN model (the TWN method from Li et al. (2016) combined with our training method). Our multi-table model (8 tables for the Embedding and Softmax layers, 16 tables for the LSTM layer) combined with the L_KM loss function achieves a PPW equivalent to the TWN PPW. The multi-table model amounts to 1.5 bits per weight in total (after accumulating all the α and B).
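The 1.5 bits-per-weight figure can be sanity checked with a small calculation; the row lengths and table counts below are illustrative assumptions (one bit for B plus 32-bit floats for the shared α), not exact layer shapes from the paper.

```python
# Storage cost per weight for a multi-table binary model: 1 bit for B plus
# 32-bit floats for the shared alphas (illustrative row sizes, not exact).
def bits_per_weight(row_len, tables_per_row, alpha_bits=32):
    weights_per_alpha = row_len / tables_per_row
    return 1 + alpha_bits / weights_per_alpha

print(bits_per_weight(512, 8))    # e.g. Embedding rows: 1 + 32/64  = 1.5 bits
print(bits_per_weight(2048, 16))  # e.g. LSTM rows:      1 + 32/128 = 1.25 bits
```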
Applying L_KM reduces the model size by 25%, from TWN (2 bit) to 1.5 bits, with an equivalent PPW. We also perform 1-bit quantization, reaching 128.18 PPW.

Hybrid LSTM. In the WikiText-2 LSTM model, the Embedding and Softmax layers each form over 45% of the full precision model size. Therefore, we selectively optimize the number of quantization bits for each layer to achieve a higher compression rate. 1-bit quantization was found to be sufficient for the Embedding layer. However, the other layers required more bits. We fix 1-bit quantization for the Embedding layer (1 table per row), TWN for the Softmax layer, and vary the number of quantization bits for the LSTM layer. Using 2 bits for the LSTM layer (1.53 bits per weight in total) provides a PPW better than our TWN greedy model, with a 25% smaller model size.

5.3 ABLATION STUDY

Random Initialization. The distribution of bit-flips and the convergence of accuracy for step training with a randomly initialized model and with a pre-trained model are observed to be similar (Figure 2). The accuracy gap between the Binary Step Training (BST) model with a pre-trained model and BST without a pre-trained model is less than 0.1% (Table 1).

One-Step Quantization. We examine the degradation in accuracy with just one-step quantization (no retraining). ResNet32 with a full precision accuracy of 92.47% on CIFAR-10 produces 44.33% accuracy. To examine the potential of L_KM, ResNet32 is again trained from random initialization in full precision mode with L_KM (without any form of quantization). Although the full precision accuracy drops to 91.8%, the one-step quantization accuracy goes up to 76.32%. Increasing the regularization constant for L_KM raises the one-step quantization accuracy to 84.51%.

Robustness. Table 4 shows the robustness of iterative quantization with QSS varied over two orders of magnitude. As explained in Section 3.1, varying the learning rate can provide the same functionality as varying QSS. As most modern neural networks use a special learning rate policy (such as exponential decay or step-wise decay), the training procedure is overall robust to the choice of QSS. The simplicity of the algorithm and the robustness to the added hyper-parameter facilitate quick adoption of our proposed technique.

5.4 IMAGENET

A full precision ResNet18 is trained on ImageNet (Russakovsky et al., 2015) following the base repository (https://github.com/tensorflow/models/) with a batch size of 256. Our binary model reaches 60.6% Top-1 accuracy compared to a full precision accuracy of 69.6% (Table 5). Our model shows accuracy comparable to existing quantization methods. We believe our model can reach higher accuracy by using layer-by-layer quantization as done by Zhou et al. (2018).

5.5 QUANTIZATION OVERHEAD

Let the training time per iteration be defined as the combined time to perform forward propagation and back-propagation on a batch of data. We evaluate the overhead of performing one quantization step relative to this training time per iteration. All timings are averaged over 1000 iterations and over ResNet32 and WideResNet. We observed that the overhead of greedy quantization is the lowest (8% and 12% of the training time for 1-bit and 2-bit quantization, respectively). More sophisticated quantization methods using regression or iterative procedures, namely refined and alternating quantization, have overheads of 5x and 40x over the training time, respectively. Table 3 compares the benefit of using these quantization methods, where alternating quantization shows the best performance despite the biggest overhead.
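A back-of-the-envelope illustration (our own, using the relative per-quantization costs quoted above) of how quantizing only once every QSS iterations amortizes even the most expensive quantizer:

```python
# Amortized quantization overhead when quantizing once every QSS iterations.
# Per-quantization costs relative to one training iteration, as reported:
# greedy 1-bit ~0.08, refined ~5x, alternating ~40x.
qss = 500
for name, cost in {"greedy_1bit": 0.08, "refined": 5.0, "alternating": 40.0}.items():
    every_step = cost * 100.0        # existing methods: quantize every iteration
    amortized = cost / qss * 100.0   # step training: quantize every qss iterations
    print(f"{name:12s} every-step {every_step:7.1f}%  amortized {amortized:5.2f}%")
```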
As our training method, unlike existing methods, performs quantization only once every 500 iterations, the overhead of quantization is reduced by 500x. As a result, the overhead of even the most expensive quantization remains around 10% of the training time.

6 CONCLUSION

In this work, we have presented an iterative quantization technique that performs quantization once every few steps, combined with α training of the binary model. Step training explores flatter minima while escaping sharp minima, and α training performs exploitation of the chosen minimum. We also presented a loss function L_KM which allows the weights to be adjusted for improved quantization. We demonstrated full precision accuracy recovery on the CIFAR and WikiText-2 datasets with our quantized models. We also presented a hybrid model with 1.5 bits performing better than our TWN model.

B TRAINING DETAILS

We provide more details on the training procedure for the networks for all the datasets.

B.1 CIFAR

The CIFAR dataset consists of 50000 training images and 10000 test images. Each image is of size 32x32. The CIFAR-10 dataset classifies the corpus of images into 10 disjoint classes. CIFAR-100 classifies the image set into 100 fine-grained disjoint classes.

ResNet. ResNet32 and WideResNet 28-10 were both trained for 90k iterations with a batch size of 128. A step-wise decay learning schedule was used. With an initial learning rate of 0.1, the learning rate was decayed by 0.1 at 40k, 60k, and 80k iterations. The momentum optimizer was used for training with momentum set to 0.9. The weight decay rate was set to 0.0005. Training data was pre-processed with random cropping and random horizontal flipping. Evaluation data was pre-processed with a single central crop only. The Quantization Step Size was set to 500 during step training.

Pruning. ResNet32 was pruned with 60% as the final sparsity, starting from an initial sparsity of 20%. Pruning was started from a pre-trained model. The sparsity was gradually increased at an exponential rate (exponential factor of 3), with pruning being performed every 100 iterations. Re-training for pruning was performed for 40k iterations.

B.2 WIKITEXT-2

WikiText-2 contains 2088k training, 217k validation, and 245k test tokens, with a vocabulary of 33k words. The model for WikiText-2 consisted of 1 LSTM layer with 512 units. The initial learning rate was set to 20. The learning rate was decayed by 1.2 every 2 epochs. Training was terminated once the learning rate fell below 0.001 or a maximum of 80 epochs was reached. The absolute gradient norm was clipped at 0.25. The network was unrolled for 30 time steps. Training was performed with a dropout ratio of 0.5. Weights were clipped to an absolute maximum of 1.0. The Quantization Step Size was set to 500 during step training.

Divergence with Greedy Quantization. Using greedy quantization with 2-bit quantization for the LSTM model always diverged the training on the WikiText-2 dataset. To make the model converge to some extent, we used a 1-bit quantized model as the initialization for 2-bit quantization. Although the 1-bit initialized model converges for a few epochs, it also diverges after 10-15 epochs. The results reported in Table 3 for 2-bit greedy quantization follow this initialization with a 1-bit quantized model. The divergence of the network with greedy quantization is the reason for using TWN for the Softmax layer (and not 2-bit quantization) in our hybrid model.

B.3 IMAGENET

ImageNet consists of 1281176 training images and 50000 validation images, classified into 1000 classes. The ResNet18 network was used for training.
Training data was pre-processed with random cropping and random horizontal flipping. Validation data was pre-processed with a single central crop. A step-wise decay learning rate schedule was followed, with an initial learning rate of 0.1 decayed by a factor of 0.1 at epochs 30, 60, 80, and 90. The complete training procedure was performed for 100 epochs with a batch size of 256. The momentum optimizer was used for training with momentum set to 0.9. The weight decay rate was set to 0.0005. The Quantization Step Size was set to 500 during step training.

B.4 BATCH NORMALIZATION

Batch normalization (Ioffe & Szegedy, 2015) parameters (µ and σ) are updated using a moving average. Consequently, a quantized-distortion step performed even 100 iterations earlier would have less than 1% effect on the BN parameters (with momentum = 0.99). As a result, the BN parameters are not well suited to the quantized model, and this leads to a drop in evaluation accuracy for the quantized model. To avoid this drop, the BN parameters are re-evaluated over 1 training epoch (keeping the other parameters fixed) before performing evaluation in phase1. Phase2 does not require any special care for batch normalization as there is no distortion step.
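A minimal sketch of the BN re-estimation pass described above; `forward_to_bn_input` is a hypothetical helper that runs the quantized network up to (but not including) a given batch-norm layer, and only the BN statistics are recomputed while every other parameter stays frozen.

```python
import numpy as np

def reestimate_bn_stats(batches, forward_to_bn_input):
    """Recompute one BN layer's mean/variance from one pass over the training
    data, with the quantized weights and all other parameters held fixed."""
    count = 0
    sum_, sumsq = None, None
    for x in batches:
        a = forward_to_bn_input(x)               # (batch, features) activations
        if sum_ is None:
            sum_ = np.zeros(a.shape[1])
            sumsq = np.zeros(a.shape[1])
        count += a.shape[0]
        sum_ += a.sum(axis=0)
        sumsq += (a ** 2).sum(axis=0)
    mu = sum_ / count
    var = sumsq / count - mu ** 2
    return mu, var  # these replace the layer's moving statistics before evaluation
```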
1. What is the focus of the paper, and what are the claimed contributions? 2. What are the concerns regarding the convergence difference between B and \alpha? 3. Why does the reviewer think that the motivation for using another phase to train \alpha is not strong? 4. How does the reviewer assess the novelty of the iterative quantization approach? 5. What are the weaknesses of the experimental section, particularly in terms of performance comparisons and missing results?
Review
Review The paper is a little hard to follow and some parts are poorly written. While the authors claim that they use the greedy approach (in Sec. 2) for quantization, where both B and \alpha are learned in a greedy way, it is not clear why there is a convergence difference between the two as claimed by the authors in Section 3.1. Moreover, the authors claim faster convergence of B than \alpha because fewer bit flips are observed in the left subplot of Figure 2. However, this conclusion is not quite convincing because 1) on the right subplot of Figure 2, it seems that \alpha also becomes more stable after 80k iterations; 2) the fewer bit flips may come from using a stepwise learning rate decay scheme. Thus, the motivation for using another phase to train \alpha is not strong. The iterative quantization approach has limited novelty. It is similar to many quantization methods like BinaryConnect and XNOR-Net, except that the quantization step is not done immediately after the BP and model updates, but after some iterations of full-precision training. Moreover, these methods also use full-precision weights for updates during training. Clarity in the experiments section is a little better than in the previous sections. However,
- The proposed method only performs comparably with TTQ, and shows a significant accuracy drop on the CIFAR-10 and CIFAR-100 datasets (especially on CIFAR-100).
- On the ImageNet dataset, there is a large accuracy drop of the proposed method compared to Zhou et al. (2018). Though the authors say that they believe their proposed model can reach a higher accuracy by using layer-by-layer quantization as in Zhou et al. (2018), it is hard to verify this claim due to the lack of corresponding results. Thus, the efficacy of the proposed method on large datasets or models is hard to evaluate.
ICLR
Title
Computation-Efficient Quantization Method for Deep Neural Networks

Abstract
Deep Neural Networks, being memory and computation intensive, are a challenge to deploy on smaller devices. Numerous quantization techniques have been proposed to reduce the inference latency/memory consumption. However, these techniques impose a large overhead on the training procedure or need to change the training process. We present a non-intrusive quantization technique based on re-training the full precision model, followed by directly optimizing the corresponding binary model. The quantization training process takes no longer than the original training process. We also propose a new loss function to regularize the weights, resulting in reduced quantization error. Combining both helps us achieve full precision accuracy on the CIFAR dataset using binary quantization. We also achieve full precision accuracy on WikiText-2 using 2-bit quantization. Comparable results are also shown for ImageNet. We also present a 1.5-bit hybrid model exceeding the performance of the TWN LSTM model for WikiText-2.

1 INTRODUCTION
Different variants of Deep Neural Networks have achieved state-of-the-art results in various domains, from computer vision to language processing (Krizhevsky et al., 2012; Ren et al., 2015; Vaswani et al., 2017; Cho et al., 2014). However, newer models are becoming more memory and computation intensive to achieve performance improvements. For example, ResNet (He et al., 2016), the winner of ILSVRC 2015, increased the number of layers by over 4x to gain less than 2% Top-1 accuracy improvement on ImageNet (Russakovsky et al., 2015). Compression techniques, such as knowledge distillation, pruning, low-rank approximation, and quantization, have been proposed to reduce the model size (Vapnik & Izmailov, 2015; Han et al., 2015; Sainath et al., 2013; Courbariaux et al., 2015). These compression techniques are moving the field of model compression towards the goal of deploying DNN models on mobile phones and other embedded devices. Courbariaux et al. (2015) proposed the widely used technique for training quantized neural networks, where binary weights are used during forward and backward propagation, while full precision weights are preserved for accumulating gradients. Binary weights are approximated from the full precision weights every iteration. Zhou et al. (2018; 2017a) have proposed an incremental quantization training procedure where the range for the weights is incrementally reduced. Choi et al.
(2017) use Hessian-weighted k-means clustering for quantization. Lee & Kim (2018) used an iterative procedure of quantizing, de-quantizing, and completely retraining the full precision model, performed multiple times. All of these techniques aim to reduce the quantization error (the error between the full precision model and the corresponding quantized model). However, most of these quantization techniques either add an extra set of hyper-parameters or modify/lengthen the original training procedure. Designing a neural network model consists of two main steps: choose a proper architecture/model given the characteristics of the task, and optimize the hyper-parameters for convergence and accuracy. Hyper-parameter search ranges from the number of layers in a model (Zoph et al., 2018) to the learning rate (lr) and batch-size (bs) combination (compare the ResNet (He et al., 2016) and Inception (Szegedy et al., 2016) networks with altogether different hyper-parameter sets). Courbariaux et al. (2015) requires updating the back-propagation procedure to train quantized networks. Zhou et al. (2018) and Lee & Kim (2018) require very long training times because of multiple iterations of training and extra introduced hyper-parameters. Over time, the focus on reducing the model size and the corresponding inference latency has led to either lengthening of or major modifications to the training procedure. Our proposed quantization technique addresses these issues, resulting in easy adoption of our technique. Our contributions include but are not limited to:
• A simple quantization training method based on re-training, without requiring major modifications to the original training procedure. Training consists of two phases: phase1 trains the full precision model (with quantization) and phase2 trains the binary model constructed by phase1.
• Reducing the overhead of expensive quantization techniques, as quantization is performed only every few steps (specifically once every 500 iterations in the experiments).
• Maintaining the total number of iterations and the time required to train the quantized network compared to the full precision network.
• Achieving full precision accuracy for WikiText-2 and the CIFAR dataset with 2-bit and 1-bit quantization, respectively. Presenting a hybrid 1.5-bit LSTM model for WikiText-2 outperforming the TWN LSTM model. Achieving performance comparable to existing works for ImageNet.

2 RELATED WORK

Quantization. Courbariaux et al. (2015) proposed the idea of training binary neural networks with quantized weights. Rastegari et al. (2016) introduced shared scaling factors to allow more range for binary values. Hubara et al. (2016), Zhou et al. (2016), Lin et al. (2017), McDonnell (2018), and Hubara et al. (2018) built upon this training methodology, along with the introduction of binary activation units. Lee & Kim (2018) performs full precision retraining multiple times to train a quantized network. Ternary quantization was proposed (Zhu et al., 2017; Li et al., 2016; Wang et al., 2018) to mitigate the gap between full precision and quantized-weight networks. Let W ∈ R^{k×c×f×f} represent a weight tensor of a convolution layer l with n elements in total, where k, c, and f represent the output channels, input channels, and size of the filter, respectively. Rastegari et al. (2016) splits W into a binary weight B ∈ {−1,+1}^{k×c×f×f} and a scaling factor α ∈ R_+^k shared per output, where

$$\mathbf{B} = \mathrm{sign}(\mathbf{W}), \qquad \alpha = \langle \mathbf{B}, \mathbf{W} \rangle / n \qquad (1)$$

obtained by minimizing ‖W − αB‖². Binary quantization is extended to ternary quantization, where B ∈ {−1, 0,+1}^{k×c×f×f}.
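A small NumPy illustration of Eq. 1 with one scaling factor per output row (not the authors' code); since B = sign(W), the inner product ⟨B, W⟩/n reduces to the mean absolute value of the row.

```python
import numpy as np

def binary_quantize(w2d):
    """Per-row 1-bit quantization (Eq. 1): B = sign(W), alpha = <B, W>/n,
    the minimizer of ||W - alpha*B||^2 for each row."""
    b = np.where(w2d >= 0, 1.0, -1.0)                 # sign, mapping 0 -> +1
    alpha = np.abs(w2d).mean(axis=1, keepdims=True)   # <B, W>/n = mean(|W|)
    return alpha, b

w = np.random.randn(8, 32)             # e.g. a (k, c*f*f)-reshaped kernel
alpha, b = binary_quantize(w)
w_hat = alpha * b                       # de-quantized weights used for distortion
print(np.linalg.norm(w - w_hat))        # per-layer quantization error
```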
Ternary quantization introduces a threshold factor Δ_l to assign the ternary value to a weight. Li et al. (2016), Zhu et al. (2017), and Wang et al. (2018) have proposed various methodologies to evaluate the threshold. Lee & Kim (2018) performed ternary quantization by combining pruning and binary quantization. Binary quantization was extended to multi-bit quantization using a greedy methodology by Guo et al. (2017). For k-bit quantization, minimizing ‖Ŵ_i − α_i B_i‖ for the i-th bit results in

$$\mathbf{B}_i = \mathrm{sign}(\hat{\mathbf{W}}_i), \qquad \alpha_i = \langle \mathbf{B}_i, \hat{\mathbf{W}}_i \rangle / n, \qquad \text{where } \hat{\mathbf{W}}_i = \mathbf{W} - \sum_{j=1}^{i-1} \alpha_j \mathbf{B}_j \qquad (2)$$

referred to as the greedy approach. The greedy approach was improved by the refined method, where α_i is computed as ((B_i^T B_i)^{-1} B_i^T W)^T. Xu et al. (2018) improved the refined method by performing a binary search over the refined α set and alternately evaluating α and B. Low-precision networks have also been proposed to reduce the gap when using quantized activation units (Zhuang et al., 2018). Quantization has also been applied to RNNs and Long Short-Term Memory (LSTM) models (Hou et al., 2017; Guo et al., 2017; Zhou et al., 2017b; Xu et al., 2018; Lee & Kim, 2018). We use greedy quantization in this work due to its simple operations (although alternating quantization yields better results at the cost of a higher computation overhead). The next section describes our quantization training procedure in detail.

3 ITERATIVE QUANTIZATION

Choromanska et al. (2015) show that minima of high quality (measured by test accuracy) for large-size networks occur in a well-defined band. Choromanska et al. (2015) also conjectured that training using methods like stochastic gradient descent or simulated annealing converges to a minimum in this band. Minima in the band can have varying flatness, where flat minima introduce a smaller error in accuracy upon adding distortion to the weights (Hochreiter & Schmidhuber, 1995). However, exploring multiple minima has been a challenging task. Simulated annealing¹ (Kirkpatrick et al., 1983) explores various minima, but does not aim to find wider minima. Motivated by simulated annealing, we propose a training technique to enable exploration of a wider minimum among multiple minima. The technique allows escaping from relatively sharper minima and aims to find wider minima in the band. Our training procedure consists of two phases. Phase1 trains the full precision network with quantization. Phase2 fine-tunes the binary network obtained from phase1.

3.1 PHASE1: STEP TRAINING

The goal of phase1 is to produce a full precision model with an optimized B and reduced quantization error (the error between the full precision and quantized models). Phase1 does not modify the original training procedure. Instead, in addition to the original training, phase1 just adds an extra distortion step using quantization (referred to as the quantized-distortion step), performed once every few iterations. Applying quantized-distortion to the weights consists of 3 parts: quantize the weights of each layer (quantization), convert the quantized weights back to the full precision format (de-quantization), and update the full precision model with the de-quantized weights. Quantized-distortion is performed once every Quantization Step Size (QSS) iterations. The original training procedure combined with quantized-distortion is referred to as Step Training (Figure 1a). Step training is performed in phase1 until the convergence of training (in principle).
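For concreteness, the greedy k-bit scheme of Eq. 2 can be sketched in a few lines of NumPy (not the authors' code); for simplicity it operates on a flat weight vector with one α per bit, and a comment notes how the refined variant would differ.

```python
import numpy as np

def greedy_multibit(w, num_bits):
    """Greedy multi-bit quantization (Eq. 2): binarize the residual bit by bit,
    each bit i getting B_i = sign(residual) and alpha_i = mean(|residual|)."""
    residual = w.astype(np.float64).copy()
    alphas, bits = [], []
    for _ in range(num_bits):
        b = np.where(residual >= 0, 1.0, -1.0)
        alpha = np.abs(residual).mean()        # <B_i, W_hat_i>/n
        alphas.append(alpha)
        bits.append(b)
        residual -= alpha * b                  # W_hat_{i+1} = W - sum_j alpha_j B_j
    w_hat = sum(a * b for a, b in zip(alphas, bits))
    return np.array(alphas), np.stack(bits), w_hat

# The refined method would instead re-fit all alphas jointly by least squares,
# alphas = (B^T B)^{-1} B^T w, with B stacking the sign vectors column-wise.
w = np.random.randn(1024)
alphas, bits, w_hat = greedy_multibit(w, num_bits=2)
print(alphas, np.linalg.norm(w - w_hat))
```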
Full precision training of the network for QSS iterations explores the curvature of the local convex surface around the current local minimum. Applying quantized-distortion after this training moves the model to the quantized lattice point on the training contour. Suppose that the quantized lattice point exists outside the curvature around a sharp minimum. Then the network escapes such sharper minima in phase1. In contrast to existing quantization training methods, step training does not store quantized weights. Instead, step training updates the full precision weights and replaces them with their quantized version every few iterations.

Quantization Step Size. QSS determines the amount of retraining to be done between two quantized-distortion steps. QSS needs to be big enough to compensate for the error added by the distortion and to let the network explore the current local curvature. However, QSS should not be so large that the weights diverge far from a nearby quantized lattice point. Comparing a big vs. a small QSS: a big QSS allows the weights to explore farther, allowing the binary representation of the weights to change (weights need a large amount of updates to change their sign). On the other hand, a small QSS allows the training to exploit the current local curvature and fine-tune α. We observe that, for a training procedure with a fixed learning rate, starting with a big QSS and reducing QSS over the training period results in better convergence. However, the same behavior can be approximated with a fixed QSS and a varying learning rate. Figure 1b shows the movement of accuracy using step training with a step-wise reducing learning rate and a fixed QSS for ResNet32 on CIFAR-10. A larger learning rate enables a larger amount of updates (and hence more curvature exploration) given the same gradient from back-propagation (fluctuations in the accuracy). On the other hand, a smaller learning rate helps exploitation (fine-tuning the parameters inside the current minimum), as shown by a smoother rise in accuracy. Hence, we use a fixed QSS (500 iterations) in the rest of the manuscript, although one can use a varying QSS for further fine-tuning.

¹Simulated annealing explores neighbors of the current solution in a randomized order. Worse neighboring solutions are also explored with relatively high probability (temperature) at the start (exploration). The temperature is reduced over time and, later, only nearby better solutions are explored (exploitation).

Convergence of B. We observe that B converges earlier than α during step training. This observation is demonstrated in our experiment with step training for CIFAR-10 with ResNet32. Figure 2 shows the movement of the weights with step training.² Initially, the sign bits of the weights flip frequently (with a higher learning rate). However, with a smaller learning rate (after 80k iterations for CIFAR-10), B does not change and only α is optimized.

²All the figures are presented with 160k iterations, with the learning rate decayed by 0.1 every 40k steps, to investigate the effects of 4 different learning rates.

Convergence of α. Let W be a tensor of full precision weights with n elements. W is quantized into B (a binary tensor) and α (shared scaling factors). Step training updates W every iteration. On the other hand, (B, α) are calculated every QSS iterations. Let ΔW denote the total update accumulated for W since the last distortion step. Let Δα denote the change between the new α and the α calculated at the previous distortion step. For binary quantization, the updated α is given by:

$$\alpha + \Delta\alpha = \frac{1}{n} \sum |\mathbf{W} + \Delta\mathbf{W}| \qquad (3)$$

where |·| denotes the element-wise absolute value.
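Putting the pieces of phase1 together, the step-training loop can be summarized by the following sketch (our own simplification with SGD as a stand-in optimizer; `grad_fn` and `quantize_fn` are hypothetical placeholders for the real back-propagation and for a per-layer quantize/de-quantize such as Eq. 1):

```python
def step_training(weights, grad_fn, quantize_fn, num_iters, qss=500, lr=0.1):
    """Phase1 'step training': ordinary full-precision training, with a
    quantize/de-quantize distortion overwriting the weights every `qss` steps."""
    for it in range(1, num_iters + 1):
        grads = grad_fn(weights)                               # normal training step
        weights = [w - lr * g for w, g in zip(weights, grads)]
        if it % qss == 0:                                      # quantized-distortion step
            weights = [quantize_fn(w) for w in weights]        # quantize, then de-quantize
    return weights
```

Phase2 would then freeze the resulting B and continue training only the α values of the binary model.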
1. What is the focus of the paper regarding neural network quantization? 2. What are the strengths and weaknesses of the proposed approach compared to other works in the field? 3. How does the reviewer assess the clarity and completeness of the paper's content? 4. Are there any concerns or questions regarding the experimental results and comparisons? 5. What are the limitations of the paper, particularly in terms of its contributions and claims?
Review
Review The paper proposes a method based on re-training the full-precision model and then optimizing the corresponding binary model. It consists of two phases: (1) full-precision model training, where a quantization step is introduced every QSS iterations to train the network, and (2) fine-tuning of the quantized network, where the trained network is converted into a binary model. In addition, using the skewed matrix for quantization improves the accuracy. A loss function based on the k-means form is then used to regularize the weights, reducing the quantization error. Quantization experiments for CNNs and LSTMs have been conducted on the CIFAR-10, CIFAR-100, ImageNet, and WikiText-2 datasets. This paper is presented clearly. However, it could be improved by introducing the motivation for the tricks (e.g., the skewed matrix and the k-means-related loss) used for quantization. In the experiments, the accuracy on the CIFAR and ImageNet datasets is worse than that of some competitors. For example, the accuracy of the proposed method is significantly worse than Zhou et al. (2018) on ImageNet. It would be better to analyze the reason. In addition, as claimed in the introduction, the contribution of this paper is to reduce the overhead of expensive quantization. However, no experimental results on computation time and parameter size have been shown.
ICLR
Title Predicting the Generalization Gap in Deep Networks with Margin Distributions Abstract Recent research has demonstrated that deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held out data. This phenomenon indicates that loss functions such as cross-entropy are not a reliable indicator of generalization. This leads to the crucial question of how generalization gap can be predicted from training data and network parameters. In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap. Our measure is based on the concept of margin distribution, which are the distances of training points to the decision boundary. We find that it is necessary to use margin distributions at multiple layers of a deep network. On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of margin distribution rather than just the margin (closest distance to decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum). Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization. 1 INTRODUCTION Generalization, the ability of a classifier to perform well on unseen examples, is a desideratum for progress towards real-world deployment of deep neural networks in domains such as autonomous cars and healthcare. Until recently, it was commonly believed that deep networks generalize well to unseen examples. This was based on empirical evidence about performance on held-out dataset. However, new research has started to question this assumption. Adversarial examples cause networks to misclassify even slightly perturbed images at very high rates (Goodfellow et al., 2014; Papernot et al., 2016). In addition, deep networks can overfit to arbitrarily corrupted data (Zhang et al., 2016), and they are sensitive to small geometric transformations (Azulay & Weiss, 2018; Engstrom et al., 2017). These results have led to the important question about how the generalization gap (difference between train and test accuracy) of a deep network can be predicted using the training data and network parameters. Since in all of the above cases, the training loss is usually very small, it is clear that existing losses such as cross-entropy cannot serve that purpose. It has also been shown (e.g. in Zhang et al. (2016)) that regularizers such as weight decay cannot solve this problem either. Consequently, a number of recent works (Neyshabur et al., 2017b; Kawaguchi et al., 2017; Bartlett et al., 2017; Poggio et al., 2017; Arora et al., 2018) have started to address this question, proposing generalization bounds based on analyses of network complexity or noise stability properties. However, a thorough empirical assessment of these bounds in terms of how accurately they can predict the generalization gap across various practical settings is not yet available. In this work, we propose a new quantity for predicting generalization gap of a feedforward neural network. 
Using the notion of margin in support vector machines (Vapnik, 1995) and its extension to deep networks (Elsayed et al., 2018), we develop a measure that shows a strong correlation with the generalization gap and significantly outperforms recently developed theoretical bounds on generalization.² This is empirically shown by studying a wide range of deep networks trained on the CIFAR-10 and CIFAR-100 datasets. The measure presented in this paper may be useful for constructing new loss functions with better generalization. Besides the improvement in the prediction of the generalization gap, our work is distinct from recently developed bounds and margin definitions in a number of ways:

1. These recently developed bounds are typically functions of weight norms (such as the spectral, Frobenius, or various mixed norms). Consequently, they cannot capture variations in network topology that are not reflected in the weight norms, e.g. adding residual connections (He et al., 2016), without careful additional engineering based on the topology changes. Furthermore, some of the bounds require specific treatment for nonlinear activations. Our proposed measure can handle any feedforward deep network.

2. Although some of these bounds involve margin, the margin is only defined and measured at the output layer (Bartlett et al., 2017; Neyshabur et al., 2017b). For a deep network, however, margin can be defined at any layer (Elsayed et al., 2018). We show that measuring margin at a single layer does not suffice to capture the generalization gap. We argue that it is crucial to use margin information across layers and show that this significantly improves generalization gap prediction.

3. The common definition of margin, as used in the recent bounds, e.g. Neyshabur et al. (2017b), or as extended to deep networks, is based on the closest distance of the training points to the decision boundary. However, this notion is brittle and sensitive to outliers. In contrast, we adopt the margin distribution (Garg et al., 2002; Langford & Shawe-Taylor, 2002; Zhang & Zhou, 2017; 2018) by looking at the entire distribution of distances. This is shown to have far better prediction power.

4. We argue that the direct extension of the margin definition to deep networks (Elsayed et al., 2018), although allowing margin to be defined on all layers of the model, is unable to capture the generalization gap without proper normalization. We propose a simple normalization scheme that significantly boosts prediction accuracy.

∗Work done as part of the Google AI Residency. Data and relevant code are at https://github.com/google-research/google-research/tree/master/demogen

²In fairness, the theoretical bounds we compare against were designed to be provable upper bounds rather than estimates with low expected error. Nevertheless, since recent developments on characterizing the generalization gap of deep networks are in the form of upper bounds, they form a reasonable baseline.
(2017) proposes a measure based on the ratio of two quantities: the margin distribution measured at the output layer of the network; and a spectral complexity measure related to the network’s Lipschitz constant. Their normalized margin distribution provides a strong indication of the complexity of the learning task, e.g. the distribution is skewed towards the origin (lower normalized margin) for training with random labels. Neyshabur et al. (2017b;a) also develop bounds based on the product of norms of the weights across layers. Arora et al. (2018) develop bounds based on noise stability properties of networks: more stability implies better generalization. Using these criteria, they are able to derive stronger generalization bounds than previous works. The margin distribution (specifically, boosting of margins across the training set) has been shown to correspond to generalization properties in the literature on linear models (Schapire et al., 1998): they used this connection to explain the effectiveness of boosting and bagging techniques. Reyzin & Schapire (2006) showed that it was important to control the complexity of a classifier when measuring margin, which calls for some type of normalization. In the linear case (SVM), margin is naturally defined as a function of norm of the weights Vapnik (1995). In the case of deep networks, true margin is intractable. Recent work (Elsayed et al., 2018) proposed a linearization to approximate the margin, and defined the margin at any layer of the network. (Sokolic et al., 2016) provide another approximation to the margin based on the norm of the Jacobian with respect to the input layer. They show that maximizing their approximations to the margin leads to improved generalization. However, their analysis was restricted to margin at the input layer. Poggio et al. (2017) and Liao et al. (2018) propose a normalized cross-entropy measure that correlates well with test loss. Their proposed normalized loss trades off confidence of predictions with stability, which leads to better correlation with test accuracy, leading to a significant lowering of output margin. 3 PREDICTION OF GENERALIZATION GAP In this section, we introduce our margin-based measure. We first explain the construction scheme for obtaining the margin distribution. We then squeeze the distributional information of the margin to a small number of statistics. Finally, we regress these statistics to the value of the generalization gap. We assess prediction quality by applying the learned regression coefficients to predict the generalization gap of unseen models. We will start with providing a motivation for using the margins at the hidden layers which is supported by our empirical findings. SVM owes a large part of its success to the kernel that allows for inner product in a higher and richer feature space. At its crux, the primal kernel SVM problem is separated into the feature extractor and the classifier on the extracted features. We can separate any feed forward network at any given hidden layer and treat the hidden representation as a feature map. From this view, the layers that precede this hidden layer can be treated as a learned feature extractor and then the layers that come after are naturally the classifier. If the margins at the input layers or the output layers play important roles in generalization of the classifier, it is a natural conjecture that the margins at these hidden representations are also important in generalization. 
In fact, if we ignore the optimization procedure and focus on a converged network, generalization theories developed on the input, such as Lv et al. (2019), can easily be extended to the hidden layers or the extracted features.

3.1 MARGIN APPROXIMATION

First, we establish some notation. Consider a classification problem with n classes. We assume a classifier f consists of non-linear functions f_i : X → R, for i = 1, . . . , n, that generate a prediction score for classifying the input vector x ∈ X to class i. The predicted label is decided by the class with maximal score, i.e. i* = argmax_i f_i(x). Define the decision boundary for each class pair (i, j) as:

D_{(i,j)} := {x | f_i(x) = f_j(x)}   (1)

Under this definition, the l_p distance of a point x to the decision boundary D_{(i,j)} can be expressed as the smallest displacement of the point that results in a score tie:

d_{f,x,(i,j)} := min_δ ‖δ‖_p  s.t.  f_i(x + δ) = f_j(x + δ)   (2)

Unlike an SVM, computing the "exact" distance of a point to the decision boundary (Eq. 2) for a deep network is intractable (computing the distance of a point to a nonlinear surface is intractable in general; this is different from an SVM, where the surface is linear and the distance of a point to a hyperplane admits a closed-form expression). In this work, we adopt the approximation scheme from Elsayed et al. (2018) to capture the distance of a point to the decision boundary. This is a first-order Taylor approximation to the true distance of Eq. 2. Formally, given an input x to a network, denote its representation at the lth layer (the layer activation vector) by x^l. For the input layer, let l = 0 and thus x^0 = x. Then for p = 2, the distance of the representation vector x^l to the decision boundary for class pair (i, j) is given by the following approximation:

d_{f,(i,j)}(x^l) = ( f_i(x^l) − f_j(x^l) ) / ‖∇_{x^l} f_i(x^l) − ∇_{x^l} f_j(x^l)‖_2   (3)

Here f_i(x^l) represents the logit of class i of the network given x^l. Note that this distance can be positive or negative, denoting whether the training sample is on the "correct" or "wrong" side of the decision boundary, respectively. This distance is well defined for all (i, j) pairs, but in this work we assume that i always refers to the ground truth label and j refers to the second highest or highest class (if the point is misclassified). The training data x induces a distribution of distances at each layer l which, following earlier naming conventions (Garg et al., 2002; Langford & Shawe-Taylor, 2002), we refer to as the margin distribution (at layer l). For the margin distribution, we only consider distances with positive sign (we ignore all misclassified training points). This design choice facilitates our empirical analysis when we transform our features (e.g. log transform); further, it has also been suggested that it may be possible to obtain a better generalization bound by only considering the correct examples when the classifier classifies a significant proportion of the training examples correctly, which is usually the case for neural networks (Bartlett, 1998). For completeness, the results with negative margins are included in appendix Sec. 7. A problem with plain distances and their associated distribution is that they can be trivially boosted without any significant change in the way the classifier separates the classes. For example, consider multiplying the weights at a layer by a constant and dividing the weights in the following layer by the same constant. In a ReLU network, due to the positive homogeneity property (Liao et al., 2018), this operation does not affect how the network classifies a point, but it changes the distances to the decision boundary (for example, if the constant c is greater than one, multiplying the weights of a layer by c magnifies distances computed at that layer by a factor of c). To offset the scaling effect, we normalize the margin distribution.
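Before normalization is introduced, the unnormalized distance of Eq. 3 can serve as a concrete reference point. The following is a minimal NumPy sketch of that approximation; it assumes the two logits and their gradients with respect to the layer activation x^l have already been obtained from an autodiff framework, and all names are illustrative rather than part of the released code.

```python
import numpy as np

def margin_distance(f_i, f_j, grad_i, grad_j, eps=1e-12):
    """First-order approximation (Eq. 3) of the signed l2 distance of a layer
    representation x^l to the decision boundary D_(i,j).

    f_i, f_j       : scalar logits for the ground-truth class i and class j
    grad_i, grad_j : gradients of those logits w.r.t. the flattened activation x^l
    """
    numerator = f_i - f_j                                 # > 0 on the "correct" side
    denominator = np.linalg.norm(grad_i - grad_j) + eps   # guard against a vanishing gradient gap
    return numerator / denominator
```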
Consider the margin distribution at some layer l, and let x^l_k be the representation vector for training sample k. We compute the variance of each coordinate of {x^l_k} separately, and then sum these individual variances. This quantity is called the total variation of x^l. The square root of this quantity relates to the scale of the distribution: if x^l is scaled by a factor, so is the square root of the total variation. Thus, by dividing distances by the square root of the total variation, we can construct a margin distribution invariant to scaling. More concretely, the total variation is computed as

ν(x^l) = tr( (1/n) Σ_{k=1}^{n} (x^l_k − x̄^l)(x^l_k − x̄^l)^T ),  where x̄^l = (1/n) Σ_{k=1}^{n} x^l_k,   (4)

i.e. the trace of the empirical covariance matrix of activations. Using the total variation, the normalized margin is specified by:

d̂_{f,(i,j)}(x^l_k) = d_{f,(i,j)}(x^l_k) / sqrt(ν(x^l))   (5)

While this quantity is relatively primitive and easy to compute, Fig. 1 (top) shows that the normalized margin distributions based on Eq. 5 have the desirable effect of becoming heavier tailed and shifting to the right (increasing margin) as the generalization gap decreases. We find that this effect holds across a range of networks trained with different hyper-parameters.

3.2 SUMMARIZING THE MARGIN DISTRIBUTION

Instead of working directly with the (normalized) margin distribution, it is easier to analyze a compact signature of it. The moments of a distribution are a natural criterion for this purpose. Perhaps the most standard way of doing this is computing the empirical moments from the samples and then taking the nth root of the nth moment. In our experiments, we used the first five moments. However, it is a well-known phenomenon that the estimation of higher-order moments from samples can be unreliable. Therefore, we also consider an alternative way to construct the distribution's signature. Given a set of distances D = {d̂_m}_{m=1}^{n} that constitutes the margin distribution, we use the median Q2, first quartile Q1 and third quartile Q3 of the normalized margin distribution, along with the two fences that indicate variability outside the upper and lower quartiles. There are many variations for fences, but in this work, with IQR = Q3 − Q1, we define the upper fence to be max({d̂_m : d̂_m ∈ D ∧ d̂_m ≤ Q3 + 1.5 IQR}) and the lower fence to be min({d̂_m : d̂_m ∈ D ∧ d̂_m ≥ Q1 − 1.5 IQR}) (McGill et al., 1978). These 5 statistics form the quartile description that summarizes the normalized margin distribution at a specific layer, as shown in the box plots of Fig. 1. We will later see that both signature representations are able to predict the generalization gap, with the second signature working slightly better. A number of prior works, such as Bartlett et al. (2017), Neyshabur et al. (2017b), Liu et al. (2016), Sun et al. (2015), Sokolic et al. (2016), and Liang et al. (2017), have focused on analyzing or maximizing the margin at either the input or the output layer of a deep network. Since a deep network has many hidden layers with evolving representations, it is not immediately clear which of the layer margins is of importance for improving generalization.
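Before turning to which layers matter, note that the normalization of Eqs. 4–5 and the quartile description above are straightforward to compute at a single layer. The following is a minimal NumPy sketch; array shapes and names are illustrative, and the per-sample distances are assumed to come from the Eq. 3 approximation.

```python
import numpy as np

def total_variation(acts):
    """Eq. 4: trace of the empirical covariance of the layer activations,
    i.e. the sum of per-coordinate variances. acts has shape (num_samples, dim)."""
    centered = acts - acts.mean(axis=0, keepdims=True)
    return (centered ** 2).sum() / acts.shape[0]

def normalized_margins(distances, acts, eps=1e-12):
    """Eq. 5: scale-invariant margins at one layer."""
    return np.asarray(distances) / (np.sqrt(total_variation(acts)) + eps)

def quartile_signature(margins):
    """Quartile description of a layer's normalized margin distribution:
    lower fence, Q1, median, Q3, upper fence (Tukey fences, McGill et al., 1978)."""
    d = np.asarray(margins)
    q1, q2, q3 = np.percentile(d, [25, 50, 75])
    iqr = q3 - q1
    upper_fence = d[d <= q3 + 1.5 * iqr].max()
    lower_fence = d[d >= q1 - 1.5 * iqr].min()
    return np.array([lower_fence, q1, q2, q3, upper_fence])
```

Concatenating quartile_signature over the chosen layers yields the total signature used in the next subsection.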
Our experiments reveal that the margin distributions from all of the layers of the network contribute to the prediction of the generalization gap. This is also clear from Fig. 1 (top): comparing the input layer (layer 0) margin distributions between the left and right plots, the input layer distribution shifts slightly left, but the other layer distributions shift the other way. For example, if we use the quartile signature, we have 5L components in this vector, where L is the total number of layers in the network. We incorporate dependence on all layers simply by concatenating the margin signatures of all layers into a single combined vector θ that we refer to as the total signature. Empirically, we found that constructing the total signature based on four evenly-spaced layers (the input and 3 hidden layers) sufficiently captures the variation in the distributions and the generalization gap, and also makes the signature agnostic to the depth of the network.

3.3 EVALUATION METRICS

Our goal is to predict the generalization gap, i.e. the difference between training and test accuracy at the end of training, based on the total signature θ of a trained model. We use the simplest prediction model, i.e. a linear form ĝ = a^T φ(θ) + b, where a ∈ R^{dim(θ)} and b ∈ R are parameters of the predictor, and φ : R → R is a function applied element-wise to θ. Specifically, we explore two choices of φ: the identity φ(x) = x and the entry-wise log transform φ(x) = log(x), which correspond to additive and multiplicative combinations of margin statistics, respectively. We do not claim this model is the true relation, but rather that it is a simple model for prediction; and our results suggest that it is a surprisingly good approximation. In order to estimate the predictor parameters a, b, we generate a pool of n pretrained models (covering different datasets, architectures, regularization schemes, etc., as explained in Sec. 4), each of which gives one instance of the pair (θ, g), with g being the generalization gap for that model. We then find a, b by minimizing the mean squared error:

(a*, b*) = argmin_{a,b} Σ_i (a^T φ(θ_i) + b − g_i)^2,

where i indexes the ith model in the pool. The next step is to assess the prediction quality. We consider two metrics for this. The first metric examines the quality of predictions on unseen models. For that, we consider a held-out pool of m models, different from those used to estimate (a, b), and compute the value of ĝ on them via ĝ = a^T φ(θ) + b. In order to quantify the discrepancy between the predicted gap ĝ and the ground truth gap g, we use the notion of the coefficient of determination R² (Glantz et al., 1990):

R² = 1 − [ Σ_{j=1}^{n} (ĝ_j − g_j)² ] / [ Σ_{j=1}^{n} (g_j − (1/n) Σ_{j=1}^{n} g_j)² ]   (6)

R² measures what fraction of the data variance can be explained by the linear model (a simple manipulation shows that the prediction residual Σ_j (ĝ_j − g_j)² ∝ 1 − R², so R² can be interpreted as a scale-invariant alternative to the residual); it ranges from 0 to 1 on training points but can be outside that range on unseen points. To be precise, we use k-fold validation to study how the predictor performs on a held-out pool of trained deep networks. We use a 90/10 split, fit the linear model on the training pool, and measure R² on the held-out pool. The performance is averaged over the 10 splits. Since R² is now not measured on the training pool, it does not suffer from high data dimension and can be negative. In all of our experiments, we use k = 10. We provide a subset of residual plots and the corresponding univariate F-tests for the experiments in the appendix (Sec. 8). The F-score also indicates how important each individual variable is.
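As a hedged sketch of the first metric, the total signature, the least-squares fit, and the R² of Eq. 6 can be written out directly in NumPy; quartile_signature refers to the sketch in the previous section, and all other names are illustrative.

```python
import numpy as np

def total_signature(margins_per_layer):
    """Concatenate per-layer quartile signatures (input + 3 hidden layers)
    into the total signature theta (20-dimensional for the quartile variant)."""
    return np.concatenate([quartile_signature(m) for m in margins_per_layer])

def fit_predictor(thetas, gaps, log_transform=True):
    """Least-squares fit of g ≈ a^T phi(theta) + b over a pool of trained models.
    thetas: (num_models, dim) total signatures; gaps: (num_models,) generalization gaps.
    Returns the stacked coefficients [a, b]."""
    X = np.log(thetas) if log_transform else thetas
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # bias column
    coef, *_ = np.linalg.lstsq(X, gaps, rcond=None)
    return coef

def predict_gap(coef, theta, log_transform=True):
    x = np.log(theta) if log_transform else theta
    return x @ coef[:-1] + coef[-1]

def r_squared(predicted, actual):
    """Coefficient of determination (Eq. 6); can be negative on held-out models."""
    ss_res = np.sum((predicted - actual) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```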
The second metric examines how well the model fits the provided training pool; it does not require a test pool. To characterize this, we use the adjusted R̄² (Glantz et al., 1990), defined as:

R̄² = 1 − (1 − R²) (n − 1) / (n − dim(θ) − 1).   (7)

The R̄² can be negative when the data is non-linear. Note that R̄² is always smaller than R². Intuitively, R̄² penalizes the model if the number of features is high relative to the available data points. The closer R̄² is to 1, the better the model fits. Using R̄² is a simple yet effective method to test the fitness of a linear model; it is independent of the scale of the target, making it a more illustrative metric than residuals.

4 EXPERIMENTS

We tested our measure of the generalization gap ĝ, along with baseline measures, on a number of deep networks and architectures: nine-layer convolutional networks on CIFAR-10 (10 layers counting the input layer), and 32-layer residual networks on both the CIFAR-10 and CIFAR-100 datasets. The trained models and relevant TensorFlow (Abadi et al., 2016) code to compute margin distributions are released at https://github.com/google-research/google-research/tree/master/demogen

4.1 CONVOLUTIONAL NEURAL NETWORKS ON CIFAR-10

Using the CIFAR-10 dataset, we train 216 nine-layer convolutional networks with different settings of hyperparameters and training techniques. We apply weight decay and dropout with different strengths; we use networks with and without batch norm and data augmentation; we change the number of hidden units in the hidden layers. Finally, we also include training with and without corrupted labels, as introduced in Zhang et al. (2016); we use a fixed amount of 20% corruption of the true labels. The accuracy on the test set ranges from 60% to 90.5% and the generalization gap ranges from 1% to 35%. In standard settings, creating neural network models with a small generalization gap is difficult; in order to create sufficiently diverse generalization behaviors, we limit some models' capacities with large weight regularization, which decreases the generalization gap by lowering the training accuracy. All networks are trained by SGD with momentum. Further details are provided in the supplementary material (Sec. 6). For each trained network, we compute the depth-agnostic signature of the normalized margin distribution (see Sec. 3). This results in a 20-dimensional signature vector. We estimate the parameters of the linear predictor (a, b) with the log transform φ(x) = log(x) and the 20-dimensional signature vector θ. Fig. 2 (left) shows the resulting scatter plot of the predicted generalization gap ĝ against the true generalization gap g. As can be seen, the relationship is very close to linear across the range of generalization gaps, and this is also supported by the R̄² of the model, which is 0.96 (the maximum is 1). As a first baseline method, we compare against the work of Bartlett et al. (2017), which provides one of the best generalization bounds currently known for deep networks. This work also constructs a margin distribution for the network, but in a different way. To make a fair comparison, we extract the same signature θ from their margin distribution. Since their margin distribution can only be defined for the output layer, their θ is 5-dimensional for any network. The resulting fit is shown in Fig. 2 (right). It is clearly a poorer fit than that of our signature, with a significantly lower R̄² of 0.72.
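For reference, the adjusted R̄² of Eq. 7 reported in these comparisons is a one-line correction to R². A minimal sketch, together with how it would be applied to a pool of trained models using the illustrative helpers from the earlier sketches (signatures and gaps assumed precomputed):

```python
import numpy as np

def adjusted_r_squared(r2, num_models, num_features):
    """Eq. 7: penalizes a high feature count relative to the number of models."""
    return 1.0 - (1.0 - r2) * (num_models - 1) / (num_models - num_features - 1)

# Example usage over a pool of trained models:
# coef = fit_predictor(thetas, gaps)                       # from the sketch above
# r2 = r_squared(predict_gap(coef, thetas), gaps)
# print(adjusted_r_squared(r2, len(gaps), thetas.shape[1]))
```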
For a fairer comparison, we also reduced our signature θ from 20 dimensions to the best-performing 4 dimensions (one dimension fewer than what we used for Bartlett's) by dropping 16 components of our θ. This is shown in Fig. 2 (middle) and has an R̄² of 0.89, which is poorer than our complete θ but still significantly higher than that of Bartlett et al. (2017). In addition, we considered two other baseline comparisons: Sokolic et al. (2016), where the margin at the input is defined as a function of the Jacobian of the output (logits) with respect to the input; and Elsayed et al. (2018), where the linearized approximation to the margin is derived (for the same layers where we use our normalized margin approximation). To quantify the effect of the normalization, different layers, feature transformation, etc., we conduct a number of ablation experiments with the following configurations:

1. linear/log: Use signature transform of φ(x) = x or φ(x) = log(x);
2. sl: Use the signature from the single best layer (θ ∈ R^5);
3. sf: Use only the single best statistic from the total signature for all the layers (θ ∈ R^4; individual layer results can be found in Sec. 7);
4. moment: Use the first 5 moments of the normalized margin distribution as signature instead of quartile statistics, θ ∈ R^20 (Sec. 3);
5. spectral: Use the signature of spectrally normalized margins from Bartlett et al. (2017) (θ ∈ R^5);
6. qrt: Use all the quartile statistics as total signature, θ ∈ R^20 (Sec. 3);
7. best4: Use the 4 best statistics from the total signature (θ ∈ R^4);
8. Jacobian: Use the Jacobian-based margin defined in Eq. (39) of Sokolic et al. (2016) (θ ∈ R^5);
9. LM: Use the large margin loss from Elsayed et al. (2018) at the same four layers where the statistics are measured (θ ∈ R^4);
10. unnorm: no normalization.

In Table 1, we list the R̄² from fitting models based on each of these scenarios. We see that both quartile and moment signatures perform similarly, lending support to our thesis that the margin distribution, rather than the smallest or largest margin, is of importance in the context of generalization.

4.2 RESIDUAL NETWORKS ON CIFAR-10

On the CIFAR-10 dataset, we train 216 convolutional networks with residual connections; these networks are 32 layers deep with the standard ResNet-32 topology (He et al., 2016). Since it is difficult to train ResNet without activation normalization, we created generalization gap variation with batch normalization (Ioffe & Szegedy, 2015) and group normalization (Wu & He, 2018). We further use different initial learning rates. The accuracy on the test set ranges from 83% to 93.5% and the generalization gap from 6% to 13.5%. The residual networks are much deeper, so we only chose 4 layers, for feature-length compatibility with the shallower convolutional networks. This design choice also facilitates ease of analysis and circumvents the dependency on the depth of the models. Table 1 shows the R̄². Note that in the presence of residual connections that use convolution instead of identity, and of identity blocks that span more than one convolutional layer, it is not immediately clear how to properly apply the bounds of Bartlett et al. (2017) (third from last row) without morphing the topology of the architecture and careful design of reference matrices. As such, we omit them for ResNet.
Fig. 3 (left) shows the fit for the ResNet models, with R̄² = 0.87. Fig. 3 (middle) and Fig. 3 (right) compare the log-normalized density plots of a CIFAR-10 ResNet and a CIFAR-10 CNN. The plots show that the ResNet achieves a better margin distribution, correlated with greater test accuracy, even though it was trained without data augmentation.

4.3 RESNET ON CIFAR-100

On the CIFAR-100 dataset, we trained 324 ResNet-32 models with the same variation in hyperparameter settings as for the CIFAR-10 networks, with one additional initial learning rate. The accuracy on the test set ranges from 12% to 73% and the generalization gap ranges from 1% to 75%. Table 1 shows R̄² for a number of ablation experiments and the full feature set. Fig. 4 (left) shows the fit of predicted and true generalization gaps over the networks (R̄² = 0.97). Fig. 4 (middle) and Fig. 4 (right) compare a CIFAR-100 residual network and a CIFAR-10 residual network with the same architecture and hyperparameters. Under these settings, the CIFAR-100 network achieves 44% test accuracy, whereas the CIFAR-10 network achieves 61%. The resulting normalized margin density plots clearly reflect the better generalization achieved by CIFAR-10: the densities at all layers are wider and shifted to the right. Thus, the normalized margin distributions reflect the relative "difficulty" of a particular dataset for a given architecture.

5 DISCUSSION

We have presented a predictor for the generalization gap based on the margin distribution in deep networks and conducted extensive experiments to assess it. Our results show that our scheme achieves a high adjusted coefficient of determination (a linear regression predicts the generalization gap accurately). Specifically, the predictor uses the normalized margin distribution across multiple layers of the network. The best predictor uses quartiles of the distribution combined in a multiplicative way (additive in log transform). Compared to the strong baseline of the spectral-complexity-normalized output margin (Bartlett et al., 2017), our scheme exhibits much higher predictive power and can be applied to any feedforward network (including ResNets, unlike generalization bounds such as Bartlett et al. (2017); Neyshabur et al. (2017b); Arora et al. (2018)). We also find that using hidden layers is crucial for the predictive power. Our findings could be a stepping stone for studying new generalization theories and new loss functions with better generalization properties. We include the results on cross-architecture and cross-dataset comparisons, as well as some final thoughts, in Appendix Sec. 9.

ACKNOWLEDGMENTS

We are thankful to Gamaleldin Elsayed (Google), Tomer Koren (Google), Sergey Ioffe (Google), Vighnesh Birodkar (Google), Shraman Ray Chaudhuri (Google), Kevin Regan (Google), Behnam Neyshabur (NYU), and Dylan Foster (Cornell), for discussions and helpful feedback.

6 APPENDIX: EXPERIMENTAL DETAILS

6.1 CNN + CIFAR-10

We use an architecture very similar to Network in Network (Lin et al., 2013), but we remove all dropout and max pooling from the network. To create a generalization gap in this model, we make the following modifications to the base architecture:

1. Use channel sizes of 192, 288, and 384 to create different widths
2. Train with and without batch norm at all convolutional layers
3. Apply dropout at layers 3 and 6 with p = 0.0, 0.2, 0.5
4. Apply l2 regularization with λ = 0.0, 0.001, 0.005
5. Train with and without data augmentation (random cropping, flipping and shifting)
6. Train each configuration twice

In total this gives us 3 × 2 × 3 × 3 × 2 × 2 = 216 different network configurations. The models are trained with SGD with momentum (α = 0.9) at a minibatch size of 128 and an initial learning rate of 0.01. All networks are trained for 380 epochs with 10× learning rate decay at intervals of 100 epochs.

6.2 RESNET 32 + CIFAR-10

For these experiments, we use the standard ResNet-32 architecture. We consider downsampling to be the marker of a stage, so there are in total 3 stages in the ResNet-32 architecture. To create a generalization gap in this model, we make the following modifications to the architecture:

1. Use network widths that are 1×, 2×, 4× wider in number of channels
2. Train with batch norm or group norm (Wu & He, 2018)
3. Train with initial learning rates of 0.01, 0.001
4. Apply l2 regularization with λ = 0.0, 0.02, 0.002
5. Train with and without data augmentation (random cropping, flipping and shifting)
6. Train each configuration 3 times

In total this gives us 3 × 2 × 2 × 3 × 2 × 3 = 216 different network configurations. The models are trained with SGD with momentum (α = 0.9) at a minibatch size of 128. All networks are trained for 380 epochs with 10× learning rate decay at intervals of 100 epochs.

6.3 RESNET 32 + CIFAR-100

For these experiments, we use the standard ResNet-32 architecture. We consider downsampling to be the marker of a stage, so there are in total 3 stages in the ResNet-32 architecture. To create a generalization gap in this model, we make the following modifications to the architecture:

1. Use network widths that are 1×, 2×, 4× wider in number of channels
2. Train with batch norm or group norm (Wu & He, 2018)
3. Train with initial learning rates of 0.1, 0.01, 0.001
4. Apply l2 regularization with λ = 0.0, 0.02, 0.002
5. Train with and without data augmentation (random cropping, flipping and shifting)
6. Train each configuration 3 times

In total this gives us 3 × 2 × 3 × 3 × 2 × 3 = 324 different network configurations. The models are trained with SGD with momentum (α = 0.9) at a minibatch size of 128. All networks are trained for 380 epochs with 10× learning rate decay at intervals of 100 epochs.

7 APPENDIX: MORE REGRESSION RESULTS

7.1 ANALYSIS WITH NEGATIVE MARGINS

The last two rows contain the results of including the negative margins and regressing against both the gap (generalization gap) and against acc (test accuracy). We see that when negative margins are included, it is in general easier to predict the accuracy of the models rather than the gap itself. For convenience, we have reproduced Table 1.

7.2 ANALYSIS FOR INDIVIDUAL LAYERS' MARGIN DISTRIBUTIONS

This is a comparison of the predictive power of different individual layers, using only the margin distribution at that layer. These results illustrate the importance of margins in the hidden layers.

8 APPENDIX: FURTHER ANALYSIS OF REGRESSION

8.1 CNN + CIFAR-10 + ALL QUARTILE SIGNATURE

8.2 RESNET 32 + CIFAR-10 + ALL QUARTILE SIGNATURE

8.3 RESNET 32 + CIFAR-100 + ALL QUARTILE SIGNATURE

9 APPENDIX: SOME OBSERVATIONS AND CONJECTURES

Everything here uses the full quartile description.

9.1 CROSS ARCHITECTURE COMPARISON

We perform regression analysis with both the base CNN and ResNet-32 on CIFAR-10. The resulting R̄² = 0.91 and the k-fold R² = 0.88. This suggests that the same coefficients work generally well across architectures provided they are trained on the same data. Somehow, the distributions at the 3 locations of the networks are comparable even though the depths are vastly different.
9.2 CROSS DATASET COMPARISON

We perform regression analysis with ResNet-32 on both CIFAR-10 and CIFAR-100. The resulting R̄² = 0.96 and the k-fold R² = 0.95. This suggests that the same coefficients work generally well across datasets for the same architecture.

9.3 CROSS EVERYTHING

We join all our experiment data; the resulting R̄² = 0.93 and the k-fold R² = 0.93. It is perhaps surprising that a single set of coefficients exists across both datasets and architectures.

9.4 IMPLICATIONS ON GENERALIZATION BOUNDS

We believe that the method developed here can be used to complement existing generalization bounds; more sophisticated engineering of the predictor may be used to verify what kind of functional form a generalization bound should take, up to constant factors or exponents; it may be helpful for developing generalization bounds tighter than the existing ones.
1. What is the reviewer's main concern regarding the paper's contribution and novelty? 2. What are the strengths and weaknesses of the proposed approach compared to prior works, particularly in terms of its interpretation and technical aspects? 3. How does the reviewer assess the clarity and quality of the paper's content, including its writing style and experimental design? 4. What additional questions or concerns does the reviewer have regarding the paper's methodology and conclusions?
Review
After the author response, I have increased my score. I'm still not 100% sure about the interpretation the authors provided for the negative distances.

The paper is well written and is mostly clear. (1st line on page 4 has a typo; \bar{x}_k in eq (4) should be \bar{x}^l?)

Novelty: I am not sure whether the paper adds anything significant on top of what we know from Bartlett et al., Elsayed et al., since: (i) the fact that "normalized" margins are strongly correlated with the test set accuracy was shown in Bartlett et al. (figure 1); a major part of the definition comes from there or from the reference they cite; (ii) the Taylor approximation to compute the margin distribution is in Elsayed et al.; (iii) I think the four points listed on page 2 (which make the distinction from related work) are misleading: the way I see it, the authors use the margin distribution in Elsayed et al., which simply overcomes some of the obstacles that norm-based margins may face. The only novelty here seems to be that the authors use the margin distribution at each layer.

Technical pitfalls: Computing the d_{f,x,i,j} using Equation (3) is missing an absolute value in the numerator, as in equation (7) of Elsayed et al. The authors interpret the negative values as misclassification: why is this true? The margin distribution used in Bartlett et al. (below Figure 4 on page 5 in arxiv:1706.08498) uses labeled data, and in that case it is obvious to interpret negative values as misclassification. I don't see how this is true for eq (3) here in this paper. Secondly, why are negative points ignored? Misclassified points in my opinion are equally important; ignoring the information that a point is misclassified doesn't sound like a great idea. How do the experiments look if we don't ignore them?

Experiments: Good set of experiments. However, I find the results mildly undercut the claims the authors made in the four points listed on page 2: Section 4.1, "Empirically, we found constructing this only on four evenly-spaced layers, input, and 3 hidden layers, leads to good predictors." How can the authors explain this? By using linear models, the authors implicitly assume that the relationship between generalization gaps and signatures is linear (in Euclidean or log space). However, from the experiments (Table 1), we see that log models always have better results than linear models. Even assuming a linear relationship, I think it is informative to also provide other metrics such as MSE, AIC, BIC, etc.